• Title/Summary/Keyword: Korean vowel


Quantitative Analysis of Voice Quality after Radiation Therapy for Stage T1a Glottic Carcinoma (T1a 병기 성문암의 방사선 치료 후 음성에 관한 연구)

  • Lee Joon-Kyoo; Chung Woong-Gi
    • Radiation Oncology Journal / v.23 no.1 / pp.17-21 / 2005
  • Purpose: To evaluate the voices of irradiated patients with early glottic carcinoma and to compare them with the voices of healthy volunteers. Materials and Methods: The voice samples (sustained vowel) of seventeen male patients who had been irradiated for T1a glottic squamous carcinoma at least 1 year prior to the study were analyzed with objective voice analysis (acoustic voice analysis, aerodynamic testing, and videostroboscopic analysis) and compared with those of a normal group of twenty age- and sex-matched volunteers. Average fundamental frequency, jitter, shimmer, and noise-to-harmonic ratio were obtained for the acoustic voice analysis. Maximal phonation time, mean flow rate, intensity, subglottic pressure, glottal resistance, glottal efficiency, and glottal power were obtained for the aerodynamic test. Results: The irradiated group presented higher values of shimmer in the acoustic voice analysis. There was no significant difference between the two groups in the other parameters. Conclusion: In this study, all the objective voice parameters except shimmer were not significantly different between the irradiated group and the control group. These results suggest that voice quality is minimally affected by radiation therapy for T1a glottic carcinoma.
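The acoustic measures listed above (jitter, shimmer, noise-to-harmonic ratio) are only named in the abstract; as a rough, hedged illustration of the first two, the Python sketch below computes local jitter and local shimmer from per-cycle period lengths and peak amplitudes. The per-cycle arrays and their values are placeholder assumptions, not data from the study.

```python
import numpy as np

def local_jitter(periods):
    """Mean absolute difference of consecutive pitch periods,
    normalized by the mean period (reported as a percentage)."""
    periods = np.asarray(periods, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(periods))) / np.mean(periods)

def local_shimmer(amplitudes):
    """Mean absolute difference of consecutive cycle peak amplitudes,
    normalized by the mean amplitude (reported as a percentage)."""
    amplitudes = np.asarray(amplitudes, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes)

# Hypothetical per-cycle measurements from a sustained /a/ (seconds; arbitrary amplitude units).
periods = [0.0081, 0.0079, 0.0080, 0.0082, 0.0080]
amps = [0.61, 0.63, 0.60, 0.62, 0.61]
print(f"jitter (local):  {local_jitter(periods):.2f}%")
print(f"shimmer (local): {local_shimmer(amps):.2f}%")
```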

An Efficient Character Image Enhancement and Region Segmentation Using Watershed Transformation (Watershed 변환을 이용한 효율적인 문자 영상 향상 및 영역 분할)

  • Choi, Young-Kyoo; Rhee, Sang-Burm
    • The KIPS Transactions: Part B / v.9B no.4 / pp.481-490 / 2002
  • Off-line handwritten character recognition suffers from incomplete preprocessing because it lacks dynamic information and must deal with varied handwriting, extreme overlap between consonants and vowels, and many distorted stroke images. Consequently, off-line handwritten character recognition requires study of various preprocessing methods such as binarization and thinning. This paper considers the running time of the watershed algorithm and the quality of the resulting image as preprocessing for off-line handwritten Korean character recognition. It proposes an efficient watershed algorithm for segmenting the character and background regions of a gray-level character image, together with a segmentation function that binarizes the image using the extracted watershed result. It also proposes a thinning method that extracts the skeleton through a conditional test mask, balancing running time against skeleton quality, and evaluates the efficiency of the existing methods and the proposed method in terms of running time and quality. The average execution time of the previous method was 2.16 seconds, while that of the proposed method was 1.72 seconds. The proposed method is shown to remove noise from overlapping strokes more effectively than the previous method.
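The paper's own watershed and thinning algorithms are not reproduced here; the sketch below only illustrates the general preprocessing pipeline the abstract describes (gradient image, marker-based watershed separating character from background, binarization, skeletonization), using scikit-image. The input file name and the marker thresholds are assumptions.

```python
import numpy as np
from skimage import io, filters, morphology, segmentation

# Hypothetical grayscale character image (values in [0, 1]); the path is an assumption.
image = io.imread("hangul_char.png", as_gray=True)

# Gradient magnitude serves as the watershed "landscape".
gradient = filters.sobel(image)

# Simple markers: dark pixels -> character stroke, bright pixels -> background.
markers = np.zeros_like(image, dtype=np.int32)
markers[image < 0.3] = 1   # character region
markers[image > 0.7] = 2   # background region

# Marker-controlled watershed separates the character region from the background region.
labels = segmentation.watershed(gradient, markers)
binary = labels == 1

# Thinning: reduce the character region to a one-pixel-wide skeleton.
skeleton = morphology.skeletonize(binary)
```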

Comparisons of voice quality parameter values measured with MDVP, Praat, and TF32 (MDVP, Praat, TF32에 따른 음향학적 측정치에 대한 비교)

  • Ko, Hye-Ju; Woo, Mee-Ryung; Choi, Yaelin
    • Phonetics and Speech Sciences / v.12 no.3 / pp.73-83 / 2020
  • Measured values may differ between Multi-Dimensional Voice Program (MDVP), Praat, and Time-Frequency Analysis software (TF32), all of which are widely used in voice quality analysis, due to differences in the algorithms used in each analyzer. Therefore, this study aimed to compare the values of parameters of normal voice measured with each analyzer. After tokens of the vowel sound /a/ were collected from 35 normal adult subjects (19 male and 16 female), they were analyzed with MDVP, Praat, and TF32. The mean values obtained from Praat for jitter variables (J local, J abs, J rap, and J ppq), shimmer variables (S local, S dB, and S apq), and noise-to-harmonics ratio (NHR) were significantly lower than those from MDVP in both males and females (p<.01). The mean values of J local, J abs, and S local were significantly lower in the order MDVP, Praat, and TF32 in both genders. In conclusion, the measured values differed across voice analyzers due to the differences in the algorithms each analyzer uses. Therefore, it is important for clinicians to analyze pathologic voice after understanding the normal criteria used by each analyzer when they use a voice analyzer in clinical practice.
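MDVP and TF32 are commercial analyzers, but the Praat measures named above can be scripted; the sketch below is a minimal example using the praat-parselmouth package with Praat's commonly used default query arguments. The file name and pitch range are assumptions, and this is not the procedure reported in the paper.

```python
import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("vowel_a.wav")   # assumed recording of a sustained /a/
point_process = call(snd, "To PointProcess (periodic, cc)", 75, 500)

# Jitter and shimmer with Praat's usual default arguments.
j_local = call(point_process, "Get jitter (local)", 0, 0, 0.0001, 0.02, 1.3)
s_local = call([snd, point_process], "Get shimmer (local)",
               0, 0, 0.0001, 0.02, 1.3, 1.6)

# Praat reports a harmonics-to-noise ratio (HNR) rather than MDVP's NHR.
harmonicity = call(snd, "To Harmonicity (cc)", 0.01, 75, 0.1, 1.0)
hnr = call(harmonicity, "Get mean", 0, 0)

print(f"J local: {100 * j_local:.2f}%  S local: {100 * s_local:.2f}%  HNR: {hnr:.1f} dB")
```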

Hangul Bitmap Data Compression Embedded in TrueType Font (트루타입 폰트에 내장된 한글 비트맵 데이타의 압축)

  • Han Joo-Hyun; Jeong Geun-Ho; Choi Jae-Young
    • Journal of KIISE: Software and Applications / v.33 no.6 / pp.580-587 / 2006
  • As PDAs, IMT-2000 handsets, and e-Books have become widespread, the number of users of these products has been increasing. However, the available memory of these devices is still much smaller than that of desktop PCs. At the same time, demand for TrueType fonts on these products has grown because more users want good-quality fonts, and TrueType fonts are widely used on Windows CE devices. However, TrueType fonts take up a large portion of the available device memory, given the small memory sizes of mobile devices, so it is necessary to reduce the size of TrueType fonts. In this paper, a two-phase compression technique is presented for reducing the size of the Hangul bitmap data embedded in TrueType fonts. In the first phase, each character bitmap is divided into its initial consonant, medial vowel, and final consonant, and the character is then recomposed as a composite bitmap. In the second phase, if any two consonant or vowel components are identical, one of them is removed. The TrueType embedded bitmaps of the Hangul Wanseong (pre-composed) and Hangul Johab (pre-combined) fonts are used in the compression. Using the proposed technique, the embedded bitmap data can be compressed by around 35% in the Wanseong font and 7% in the Johab font; consequently, the overall compression rate for the total TrueType Wanseong font is about 9.26%.
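The first phase above depends on decomposing each pre-composed syllable into its initial, medial, and final jamo. The standard Unicode arithmetic for that decomposition is sketched below, together with a simple hash-based deduplication for the second phase; the bitmap dictionary and its keys are placeholder assumptions, not the paper's data structures.

```python
# A pre-composed Hangul syllable (U+AC00..U+D7A3) encodes its jamo as:
#   code point = 0xAC00 + (initial * 21 + medial) * 28 + final
def decompose(syllable: str):
    offset = ord(syllable) - 0xAC00
    initial, rest = divmod(offset, 21 * 28)
    medial, final = divmod(rest, 28)
    return initial, medial, final        # final == 0 means no final consonant

print(decompose("한"))                    # -> (18, 0, 4): ㅎ + ㅏ + ㄴ

def deduplicate(bitmaps):
    """bitmaps: dict mapping (jamo_kind, index, variant) -> bytes.
    Keeps one copy of each identical component bitmap and returns references."""
    store, refs = {}, {}
    for key, data in bitmaps.items():
        digest = hash(data)
        store.setdefault(digest, data)   # first occurrence is kept
        refs[key] = digest               # later occurrences just point at it
    return store, refs
```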

Laryngeal height and voice characteristics in children with autism spectrum disorders (자폐스펙트럼장애 아동의 후두 높이 및 음성 특성)

  • Lee, Jung-Hun; Kim, Go-Woon; Kim, Seong-Tae
    • Phonetics and Speech Sciences / v.13 no.2 / pp.91-101 / 2021
  • The purpose of this study was to investigate laryngeal characteristics in children with autism spectrum disorders (ASD). A total of 50 children participated: eight children aged 2 to 4 years diagnosed with ASD and 42 age-matched normal controls. X-ray images of the midsagittal plane of the cervical spine and larynx were recorded for all children, and the laryngeal positions of the ASD and control groups were compared. In addition, sustained vowel samples were collected from the children and analyzed for acoustic parameters. The X-rays showed that the height of the hyoid bone in the normal group was lowest at 3 years of age and ascended at 4 years of age. Nevertheless, the distance from the external acoustic meatus to the hyoid bone was longest at age 4. Four-year-olds, who are in a period of explosive language development, showed laryngeal elevation and anteriorization. In contrast, the hyoid height of the ASD group was lower than that of the control group at all ages, and there was no difference in hyoid position across ages. In the acoustic evaluation, PFR, vFo, and vAm were significantly higher in the ASD group than in the control group. The low laryngeal height of children with ASD may be associated with delayed language development. PFR, vFo, and vAm seem to be voice markers that show the difference between normal children and children with ASD.
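PFR, vFo, and vAm are MDVP-style variability measures that are only named in the abstract; as a hedged approximation, the sketch below computes coefficient-of-variation-style values and a semitone F0 range from hypothetical per-frame tracks. The exact MDVP formulas and analysis settings are not given in the paper.

```python
import numpy as np

# Hypothetical per-frame tracks from a sustained vowel (Hz; arbitrary amplitude units).
f0 = np.array([248.0, 252.5, 245.1, 260.3, 255.7, 249.9])
amp = np.array([0.52, 0.55, 0.50, 0.57, 0.54, 0.53])

vF0 = 100.0 * f0.std(ddof=1) / f0.mean()    # F0 variation (coefficient of variation, %)
vAm = 100.0 * amp.std(ddof=1) / amp.mean()  # peak-amplitude variation (%)
pfr = 12.0 * np.log2(f0.max() / f0.min())   # phonatory F0 range, in semitones

print(f"vF0 = {vF0:.2f}%  vAm = {vAm:.2f}%  PFR ~ {pfr:.1f} semitones")
```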

A preliminary study of acoustic measures in male musical theater students by laryngeal height (뮤지컬 전공 남학생에서 후두 높이에 따른 음향학적 측정치에 대한 예비 연구)

  • Lee, Kwang Yong; Lee, Seung Jin
    • Phonetics and Speech Sciences / v.14 no.2 / pp.55-65 / 2022
  • This study aimed to compare acoustic measurements across the high, middle, and low laryngeal heights of male musical theater students. Furthermore, the correlation between the relative height of the larynx and the acoustic measurements was examined, along with the predictability of the relative (vertical) height of the larynx from acoustic measurements. The participants were five male students majoring in musical theater singing, and acoustic analysis was performed by having them produce the /a/ vowel 10 times each at high, middle, and low laryngeal positions. The relative vertical position of the laryngeal prominence in each condition was measured with respect to the resting position. Results indicated that the measured relative position of the larynx differed significantly across the laryngeal-height conditions; as the larynx descended, the first three formant frequencies decreased while the spectral energy at those frequencies increased. The formant frequencies showed a weak to moderate positive correlation with the relative height of the larynx, while the spectral energy showed a moderate negative correlation. The relative height of the larynx was predicted by eight acoustic measures (adjusted R² = .829). In conclusion, the predictability of the relative height of the larynx was partially confirmed in a non-invasive manner.
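The abstract reports that eight acoustic measures predicted relative laryngeal height with adjusted R² = .829; the sketch below shows only the generic form of such a multiple regression with scikit-learn, including the adjusted R² computation. The data are randomly generated placeholders, not the study's measurements.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 8))                                  # placeholder: 8 acoustic measures per token
y = X @ rng.normal(size=8) + rng.normal(scale=0.5, size=150)   # placeholder relative laryngeal height

model = LinearRegression().fit(X, y)
r2 = model.score(X, y)

# Adjusted R^2 penalizes the number of predictors p relative to the sample size n.
n, p = X.shape
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)
print(f"R^2 = {r2:.3f}, adjusted R^2 = {adj_r2:.3f}")
```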

Comparison of voice range profiles of modal and falsetto register in dysphonic and non-dysphonic adult women (음성장애 성인 여성과 정상음성 성인 여성 간 진성구와 가성구의 음성범위프로파일 비교)

  • Jaeock Kim; Seung Jin Lee
    • Phonetics and Speech Sciences / v.14 no.4 / pp.67-75 / 2022
  • This study compared the voice range profiles (VRPs) of the modal and falsetto registers in 53 dysphonic and 53 non-dysphonic adult women using a gliding /a/ vowel. The results showed that maximum fundamental frequency (F0MAX), maximum intensity (IMAX), F0 range (F0RANGE), and intensity range (IRANGE) were lower in the dysphonic group than in the non-dysphonic group. F0MAX and F0RANGE were significantly higher in the falsetto register than in the modal register in both groups. IMAX and IRANGE were significantly higher in the falsetto register in the non-dysphonic group, but did not differ between the two registers in the dysphonic group. There was no statistically significant difference in minimum F0 (F0MIN) or minimum intensity (IMIN) between the two groups. The modal-falsetto register transition occurred at 378.86 Hz (F4#) in the dysphonic group and 557.79 Hz (C5#) in the non-dysphonic group, significantly lower in the dysphonic group. Both the modal and falsetto registers of dysphonic adult women are thus reduced compared to those of non-dysphonic adult women, indicating that the vocal folds of dysphonic women do not vibrate easily at high pitches. The results of this study provide basic data for understanding the acoustic features of voice disorders.
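The VRP summary measures compared above reduce to extrema of the pitch and intensity tracks of the glide; a minimal sketch with placeholder tracks follows. Expressing F0RANGE in semitones is one common convention and an assumption here, not necessarily the paper's.

```python
import numpy as np

# Placeholder pitch (Hz) and intensity (dB SPL) tracks from a gliding /a/.
f0 = np.array([196.0, 220.0, 311.1, 392.0, 466.2, 523.3])
spl = np.array([62.0, 65.5, 71.0, 76.5, 80.0, 83.5])

f0_min, f0_max = f0.min(), f0.max()
i_min, i_max = spl.min(), spl.max()

f0_range_st = 12.0 * np.log2(f0_max / f0_min)   # F0RANGE in semitones
i_range = i_max - i_min                         # IRANGE in dB

print(f"F0: {f0_min:.0f}-{f0_max:.0f} Hz ({f0_range_st:.1f} st)  "
      f"I: {i_min:.0f}-{i_max:.0f} dB ({i_range:.0f} dB)")
```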

Automatic detection and severity prediction of chronic kidney disease using machine learning classifiers (머신러닝 분류기를 사용한 만성콩팥병 자동 진단 및 중증도 예측 연구)

  • Jihyun Mun; Sunhee Kim; Myeong Ju Kim; Jiwon Ryu; Sejoong Kim; Minhwa Chung
    • Phonetics and Speech Sciences / v.14 no.4 / pp.45-56 / 2022
  • This paper proposes an optimal methodology for automatically diagnosing and predicting the severity of chronic kidney disease (CKD) using patients' utterances. In patients with CKD, the voice changes due to weakening of the respiratory and laryngeal muscles and vocal fold edema. Previous studies have phonetically analyzed the voices of patients with CKD, but no studies have attempted to classify patients' voices. In this paper, the utterances of patients with CKD were classified using a variety of utterance types (sustained vowel, sentence, general sentence), feature sets [handcrafted features, the extended Geneva Minimalistic Acoustic Parameter Set (eGeMAPS), and CNN-extracted features], and classifiers (SVM, XGBoost). A total of 1,523 utterances, 3 hours, 26 minutes, and 25 seconds in length, were used. F1-scores of 0.93 for automatic diagnosis, 0.89 for the 3-class problem, and 0.84 for the 5-class problem were achieved. The highest performance was obtained with the combination of general sentence utterances, the handcrafted feature set, and XGBoost. The results suggest that general sentence utterances, which can reflect speakers' overall speech characteristics, together with an appropriate feature set extracted from them, are adequate for the automatic classification of CKD patients' utterances.
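The classification setup described above (acoustic features per utterance fed to SVM or XGBoost, evaluated with F1) can be sketched generically; the feature matrix and labels below are random placeholders standing in for eGeMAPS-style features and severity classes, and none of the paper's actual data, splits, or hyperparameters are reproduced.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1523, 88))      # placeholder: e.g., 88 eGeMAPS functionals per utterance
y = rng.integers(0, 5, size=1523)    # placeholder labels for a 5-class severity problem

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
clf.fit(X_tr, y_tr)

print("macro F1:", f1_score(y_te, clf.predict(X_te), average="macro"))
```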

Cross-sectional perception studies of children's monosyllabic word by naive listeners (일반 청자의 아동 발화 단음절에 대한 교차 지각 분석)

  • Ha, Seunghee; So, Jungmin; Yoon, Tae-Jin
    • Phonetics and Speech Sciences / v.14 no.1 / pp.21-28 / 2022
  • Previous studies have provided important findings on children's speech production development. They have revealed that essentially all aspects of children's speech shift toward adult-like characteristics over time. Nevertheless, few studies have examined the perceptual aspects of children's speech tokens as perceived by naive adult listeners. To fill the gap between children's production and adults' perception, we conducted cross-sectional perceptual studies of monosyllabic words produced by children aged two to six years. Monosyllabic words in the consonant-vowel-consonant form were extracted from children's speech samples and presented aurally to five listener groups (20 listeners in total). Generally, the agreement rate between children's production of target words and adult listeners' responses increases with age. The perceptual responses to tokens produced by two-year-old children induced the largest discrepancies, and the responses to words produced by six-year-olds agreed the most. Further analyses were conducted to identify the sources of disagreement, including the types of segments and syllable structure. This study makes an important contribution to our understanding of the development and perception of children's speech across age groups.
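The central quantity in the analysis above is the agreement rate between the child's target word and the naive listener's response, tabulated by age group; a minimal pandas sketch of that tabulation follows. The table, its column names, and the example tokens are assumptions for illustration only.

```python
import pandas as pd

# Placeholder response table; in the study each token was judged by naive adult listeners.
df = pd.DataFrame({
    "age":      [2, 2, 4, 4, 6, 6],
    "target":   ["곰", "밥", "곰", "밥", "곰", "밥"],
    "response": ["곤", "밥", "곰", "밥", "곰", "밥"],
})

df["agree"] = df["target"] == df["response"]
print(df.groupby("age")["agree"].mean())   # agreement rate by age group
```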

Change in acoustic characteristics of voice quality and speech fluency with aging (노화에 따른 음질과 구어 유창성의 음향학적 특성 변화)

  • Hee-June Park; Jin Park
    • Phonetics and Speech Sciences / v.15 no.4 / pp.45-51 / 2023
  • Voice issues such as voice weakness that arise with age can have social and emotional impacts, potentially leading to feelings of isolation and depression. This study aimed to investigate the changes in acoustic characteristics resulting from aging, focusing on voice quality and spoken fluency. To this end, sustained vowel phonation and paragraph reading tasks were recorded for 20 elderly and 20 young participants. Voice-quality-related variables, including F0, jitter, shimmer, and Cepstral Peak Prominence (CPP) values, were analyzed along with speech-fluency-related variables, such as average syllable duration (ASD), articulation rate (AR), and speech rate (SR). The results showed that, in the voice-quality-related measurements, F0 was higher for the elderly and voice quality was diminished, as indicated by increased jitter and shimmer and lower CPP values. The speech fluency analysis also demonstrated that the elderly spoke more slowly, as indicated by the ASD, AR, and SR measurements. Correlation analysis between voice quality and speech fluency showed a significant relationship between shimmer and CPP values and between ASD and SR values. This suggests that changes in spoken fluency can be identified early by measuring variations in voice quality. The study further highlights the reciprocal relationship between voice quality and spoken fluency, emphasizing that deterioration in one can affect the other.
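The fluency measures above have simple operational definitions (average syllable duration, articulation rate, speech rate), and the reported relationships are correlational; the sketch below computes the three measures from placeholder counts and durations and runs one Pearson correlation with SciPy. All numbers are illustrative assumptions, and pause segmentation is assumed to have been done already.

```python
import numpy as np
from scipy.stats import pearsonr

# Placeholder values for one read paragraph.
n_syllables = 220
total_time = 62.0        # seconds, pauses included
phonation_time = 48.5    # seconds, pauses excluded (assumed pre-segmented)

asd = phonation_time / n_syllables   # average syllable duration (s/syllable)
ar = n_syllables / phonation_time    # articulation rate (syllables/s)
sr = n_syllables / total_time        # speech rate (syllables/s)
print(f"ASD = {asd:.3f} s  AR = {ar:.2f} syl/s  SR = {sr:.2f} syl/s")

# Example correlation between two measures across speakers (placeholder vectors).
shimmer = np.array([2.1, 3.4, 2.8, 4.0, 3.1])
cpp = np.array([14.2, 11.8, 12.9, 10.5, 12.1])
r, p = pearsonr(shimmer, cpp)
print(f"r = {r:.2f}, p = {p:.3f}")
```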