• Title/Summary/Keyword: Korean Vowels


A comparison of the perceptual-auditory voice quality evaluation (GRBAS) and voice-related quality of life (K-VRQOL) according to choir type of elderly women choir members (여성 노인 합창단원의 합창단 유형에 따른 청지각적 음성평가(GRBAS) 및 음성관련 삶의 질(K-VRQOL) 비교)

  • Lee, Hyeonjung;Kang, Binna;Kim, Soo Ji
    • Phonetics and Speech Sciences
    • /
    • v.12 no.2
    • /
    • pp.51-61
    • /
    • 2020
  • The purpose of this study is to compare the voice characteristics and voice-related quality of life of elderly female choir members using the perceptual-auditory voice quality evaluation (GRBAS) and the K-VRQOL scale. The participants were 77 women over 60 years old who were actively engaged in a choir in either Seoul or Busan. Two kinds of choirs, representing different engagement levels, were compared: regular choirs and church choirs. For the perceptual-auditory voice quality evaluation, experts listened to sustained /a/ vowels and graded them on the GRBAS scale. When comparing the groups, the elderly female participants in regular choirs reported higher satisfaction with their speech on the subjective speech recognition measure than those who performed in church choirs. In addition, the analysis showed high satisfaction in the physical function domain of the K-VRQOL scale. This study confirmed that choral activities can yield positive results not only in improving voice function in old age but also in improving the subjective perception of voice use, suggesting the need for systematic music programs to improve the aging voice.

Hangul Bitmap Data Compression Embedded in TrueType Font (트루타입 폰트에 내장된 한글 비트맵 데이타의 압축)

  • Han Joo-Hyun;Jeong Geun-Ho;Choi Jae-Young
    • Journal of KIISE:Software and Applications
    • /
    • v.33 no.6
    • /
    • pp.580-587
    • /
    • 2006
  • As PDAs, IMT-2000 devices, and e-Books have become widespread, the number of users of these products has been increasing. However, the available memory of these machines is still much smaller than that of desktop PCs. Demand for TrueType fonts has grown because more users want good-quality fonts, and TrueType fonts are widely used in Windows CE products. However, TrueType fonts take up a large portion of the available memory, considering the small memory sizes of mobile devices, so reducing their size is necessary. In this paper, a two-phase compression technique is presented for reducing the size of the Hangul bitmap data embedded in TrueType fonts. In the first phase, each character bitmap is divided into its initial consonant, medial vowel, and final consonant, and the character is then recomposed as a composite bitmap. In the second phase, if any two consonant or vowel components are identical, one of them is removed. The TrueType embedded bitmaps in Hangul Wanseong (pre-composed) and Hangul Johab (pre-combined) fonts are used for compression. With this technique, the embedded bitmap data can be reduced by about 35% in the Wanseong font and 7% in the Johab font; consequently, the total size of the TrueType Wanseong font is reduced by about 9.26%.
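The two-phase scheme described above (decompose each glyph into jamo component bitmaps, then store each distinct component only once) can be sketched as table-based deduplication. The tiny component bitmaps and the `compress_glyphs` helper below are hypothetical illustrations, not the paper's actual encoding:

```python
def compress_glyphs(glyphs):
    """Phase-two sketch: each glyph arrives already split into
    (initial, medial, final) component bitmaps; identical components
    are stored only once and glyphs keep indices into the table."""
    table = []        # unique component bitmaps
    index = {}        # bitmap bytes -> position in table
    compressed = []   # per-glyph triples of table indices
    for components in glyphs:
        ids = []
        for bmp in components:
            key = bytes(bmp)
            if key not in index:
                index[key] = len(table)
                table.append(key)
            ids.append(index[key])
        compressed.append(tuple(ids))
    return table, compressed

# two toy "syllables" sharing one identical component bitmap
ga  = ([0b1100], [0b0010], [0b0000])
gak = ([0b1100], [0b0010], [0b1100])
table, out = compress_glyphs([ga, gak])
```

Only three distinct component bitmaps are stored for the two glyphs; the second glyph reuses its initial-consonant bitmap as its final consonant.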

Automatic severity classification of dysarthria using voice quality, prosody, and pronunciation features (음질, 운율, 발음 특징을 이용한 마비말장애 중증도 자동 분류)

  • Yeo, Eun Jung;Kim, Sunhee;Chung, Minhwa
    • Phonetics and Speech Sciences
    • /
    • v.13 no.2
    • /
    • pp.57-66
    • /
    • 2021
  • This study focuses on the automatic severity classification of dysarthric speakers based on speech intelligibility. Speech intelligibility is a complex measure affected by features from multiple speech dimensions, yet most previous studies are restricted to features from a single dimension. To effectively capture the characteristics of the speech disorder, we extracted features from multiple speech dimensions: voice quality, prosody, and pronunciation. Voice quality consists of jitter, shimmer, harmonics-to-noise ratio (HNR), number of voice breaks, and degree of voice breaks. Prosody includes speech rate (total duration, speech duration, speaking rate, articulation rate), pitch (F0 mean/std/min/max/median/25th quartile/75th quartile), and rhythm (%V, deltas, Varcos, rPVIs, nPVIs). Pronunciation contains the percentage of correct phonemes (percentage of correct consonants/vowels/total phonemes) and the degree of vowel distortion (Vowel Space Area, Formant Centralization Ratio, Vowel Articulatory Index, F2-Ratio). Experiments were conducted using various feature combinations. The results indicate that using features from all three speech dimensions gives the best result, with an 80.15 F1-score, compared to using features from just one or two dimensions. This implies that voice quality, prosody, and pronunciation features should all be considered in the automatic severity classification of dysarthria.
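One of the vowel-distortion measures listed above, Vowel Space Area, is conventionally computed as the area of the polygon spanned by the corner vowels in the (F2, F1) plane. A minimal sketch using the shoelace formula, with illustrative formant values that are not taken from the paper:

```python
def vowel_space_area(corners):
    """Shoelace formula over (F2, F1) corner-vowel points, in Hz^2.
    `corners` must list the vertices in polygon order."""
    n = len(corners)
    s = 0.0
    for i in range(n):
        x1, y1 = corners[i]
        x2, y2 = corners[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# illustrative (F2, F1) values in Hz for /i/, /a/, /u/
triangle = [(2200, 300), (1300, 800), (900, 350)]
area = vowel_space_area(triangle)
```

A smaller area indicates more centralized (less distinct) vowels, which is why VSA serves as a vowel-distortion measure.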

A study of /l/ velarization in American English based on the Buckeye Corpus (벅아이 코퍼스를 이용한 미국 영어의 /l/ 연구개음화 연구)

  • Sa, Jae-Jin
    • Phonetics and Speech Sciences
    • /
    • v.13 no.2
    • /
    • pp.19-25
    • /
    • 2021
  • It has been widely recognized that there are two varieties of the lateral liquid /l/: light /l/ (a non-velarized allophone) and dark /l/ (a velarized allophone). However, this categorical view has been challenged in recent studies on both articulatory and acoustic grounds. The purpose of this study is to investigate whether /l/ velarization in American English should be considered a continuum, and to provide supporting data. A spontaneous American English speech database, the Buckeye Speech Corpus, was used as the material. The formant frequencies of /l/ in each syllable position were measured and analyzed statistically. The formant frequencies of /l/ in each syllable position, especially the F2 values, were significantly different from each other. The results showed that there are other significantly different varieties of /l/ in American English, which supports the continuum view of /l/ velarization. Regarding the effect of the adjacent vowel, the backness of the adjacent vowels was shown to affect the degree of /l/ velarization, regardless of the syllable position of the lateral liquid. This result helps provide solid ground for the continuum view.
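The F2-by-syllable-position comparison can be illustrated with a toy computation. The formant values below are hypothetical, chosen only to reflect the familiar pattern that onset (light) /l/ has a higher F2 than coda (dark) /l/:

```python
from statistics import mean

# hypothetical F2 measurements (Hz) of /l/ by syllable position
f2_by_position = {
    "onset": [1350, 1420, 1380],   # lighter /l/: higher F2
    "coda":  [950, 1010, 980],     # darker /l/: lower F2
}
means = {pos: mean(v) for pos, v in f2_by_position.items()}
f2_gap = means["onset"] - means["coda"]
```

A continuum view predicts that intermediate contexts (e.g. intervocalic /l/, or /l/ next to back vowels) would yield means between these two poles rather than clustering at them.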

A study on English vowel duration with respect to the various characteristics of the following consonant (후행하는 자음의 여러 특성에 따른 영어 모음 길이에 관한 연구)

  • Yoo, Hyunbin;Rhee, Seok-Chae
    • Phonetics and Speech Sciences
    • /
    • v.14 no.1
    • /
    • pp.1-11
    • /
    • 2022
  • The purpose of this study is to investigate the difference in vowel duration due to the voicing of word-final consonants in English, and its relation to the type of word-final consonant (stops vs. fricatives), (partial) devoicing, and stop release. Additionally, this study attempts to interpret the findings from the functional view that vowels before voiced consonants are produced with a longer duration in order to enhance the salience of the voicing of word-final consonants. We conducted a recording experiment with native English speakers and measured vowel duration, the degree of (partial) devoicing of word-final voiced consonants, and the release of word-final stops. First, the results showed that the ratio of the duration difference was not influenced by the type of word-final consonant. Second, the higher the degree of (partial) devoicing of word-final voiced consonants, the longer the vowel duration before them, which was compatible with the prediction of the functional view. Lastly, the ratio of the duration difference was greater when word-final stops were uttered with a release than without, which was not consistent with the functional view. These results suggest that the voicing effect cannot be sufficiently explained by its function of distinguishing the voicing of word-final consonants.
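The "ratio of the duration difference" compared across conditions can be illustrated with a simple computation over hypothetical durations (the numbers below are not the paper's measurements):

```python
def voicing_ratio(voiced_ms, voiceless_ms):
    """Ratio of vowel duration before a voiced word-final consonant
    to the duration before its voiceless counterpart."""
    return voiced_ms / voiceless_ms

# illustrative durations (ms) for a minimal pair such as "bead"/"beat"
ratio_stop = voicing_ratio(220.0, 160.0)
```

A ratio above 1.0 reflects the pre-voiced lengthening effect; comparing this ratio across stop vs. fricative finals, or released vs. unreleased stops, is how the conditioning factors above are tested.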

Study on the Neural Network for Handwritten Hangul Syllabic Character Recognition (수정된 Neocognitron을 사용한 필기체 한글인식)

  • 김은진;백종현
    • Korean Journal of Cognitive Science
    • /
    • v.3 no.1
    • /
    • pp.61-78
    • /
    • 1991
  • This paper describes the application of a modified Neocognitron model with a backward path to the recognition of handwritten Hangul (Korean) syllabic characters. In the original report, Fukushima demonstrated that the Neocognitron can recognize handwritten numeric characters of 19×19 pixels. This version accepts 61×61 images of handwritten Hangul syllabic characters, or parts thereof, entered with a mouse or a scanner. It consists of an input layer and 3 pairs of Us/Uc layers. The last Uc layer, the recognition layer, consists of 24 planes of 5×5 cells, which indicate the identity of the grapheme receiving attention at a given time and its relative position in the input layer. The network has been trained on 10 simple vowel graphemes and 14 simple consonant graphemes and their spatial features; patterns that were not easily learned were trained more extensively. The trained network, which can classify individual graphemes under deformation, noise, size variance, translation, or rotation, was then used to recognize Korean syllabic characters, using its selective attention mechanism for the image segmentation task within a syllabic character. In initial tests, the model correctly recognized up to 79% of the various test patterns of handwritten Korean syllabic characters. These results show the Neocognitron to be a powerful model for recognizing deformed handwritten characters from a large character set by segmenting its input images into recognizable parts. The same approach may be applied to the recognition of Chinese characters, which are much more complex in both their structure and their graphemes, but processing time appears to be the bottleneck before it can be implemented; special hardware such as a neural chip appears to be an essential prerequisite for practical use of the model. Further work is required to enable the model to recognize Korean syllabic characters containing complex vowels and complex consonants, for which correct recognition of the neighboring area between two simple graphemes becomes more critical.

Acoustic characteristics of speech-language pathologists related to their subjective vocal fatigue (언어재활사의 주관적 음성피로도와 관련된 음향적 특성)

  • Jeon, Hyewon;Kim, Jiyoun;Seong, Cheoljae
    • Phonetics and Speech Sciences
    • /
    • v.14 no.3
    • /
    • pp.87-101
    • /
    • 2022
  • In addition to administering a questionnaire (J-survey) on subjective vocal fatigue, voice samples were collected before and after speech-language pathology sessions from 50 female speech-language pathologists in their 20s and 30s in the Daejeon and Chungnam areas. We identified significant differences in Korean Vocal Fatigue Index scores between the fatigue and non-fatigue groups, with the most prominent differences in sections one and two. Regarding acoustic-phonetic characteristics, both groups showed a pattern in which low-frequency band energy was relatively low and high-frequency band energy was increased after the treatment sessions. This trend was well reflected in the low-to-high energy ratio of vowels, the LTAS slope, the energy in the third formant, and the energy in the 4,000-8,000 Hz range. A difference between the groups was observed only in the vowel energy of the low-frequency band (0-4,000 Hz) before treatment, with the non-fatigue group having a higher value than the fatigue group. This characteristic could be interpreted as a result of voice abuse and higher muscle tonus caused by long-term voice work. Among the perturbation parameters, shimmer local was lowered in the non-fatigue group after treatment, and the noise-to-harmonics ratio (NHR) was lowered in both groups following treatment. The decreases in NHR and shimmer local could be attributed to vocal cord hypertension, but effective voice use by the speech-language pathologists, especially in the non-fatigue group, likely also contributed. In the non-fatigue group, the harmonics-to-noise ratio increased significantly after treatment, indicating a more stable harmonic structure.
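The low-to-high energy ratio mentioned above can be sketched as a band-energy ratio in dB over a spectrum. This is a rough illustration on a toy spectrum, not Praat's exact LTAS computation, and the 4,000 Hz cutoff follows the bands named in the abstract:

```python
import math

def low_to_high_ratio_db(spectrum, cutoff_hz=4000.0):
    """Low-to-high energy ratio in dB from (frequency, energy) pairs:
    10*log10(E[0..cutoff) / E[cutoff..])."""
    low = sum(e for f, e in spectrum if f < cutoff_hz)
    high = sum(e for f, e in spectrum if f >= cutoff_hz)
    return 10.0 * math.log10(low / high)

# toy spectrum with most energy below 4,000 Hz
spec = [(500, 40.0), (1500, 30.0), (3000, 20.0), (5000, 5.0), (7000, 4.0)]
lh = low_to_high_ratio_db(spec)
```

A post-session shift of energy toward the high band, as both groups showed, would lower this ratio.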

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.71-88
    • /
    • 2017
  • Language models were originally developed for speech recognition and language processing. Using a set of example sentences, a language model predicts the next word or character from sequential input data. N-gram models have been widely used, but they cannot model the correlation between input units efficiently, since an n-gram model is a probabilistic model based on the frequency of each unit in the training set. Recently, with the development of deep learning, recurrent neural network (RNN) and long short-term memory (LSTM) models have been widely used for neural language modeling (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect the dependency between objects that enter the model sequentially (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To train a neural language model, texts need to be decomposed into words or morphemes. However, since a training set of sentences generally includes a huge number of words or morphemes, the vocabulary becomes very large, which increases model complexity. In addition, word-level or morpheme-level models can only generate vocabulary items contained in the training set. Furthermore, for highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to introduce errors in the decomposition process (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, that is, a vowel or a consonant, is the smallest unit that makes up Korean text. We construct the language model using three or four LSTM layers. Each model was trained using the stochastic gradient algorithm as well as more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was conducted on Old Testament texts using the deep learning package Keras with a Theano backend.
After pre-processing the texts, the dataset contained 74 unique characters, including vowels, consonants, and punctuation marks. We then constructed each input vector from 20 consecutive characters, with the following 21st character as the output. In total, 1,023,411 input-output pairs were included in the dataset, divided into training, validation, and test sets in a 70:15:15 proportion. All simulations were run on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss function evaluated on the validation set, the perplexity evaluated on the test set, and the training time of each model. All optimization algorithms except the stochastic gradient algorithm showed similar validation loss and perplexity, clearly superior to those of the stochastic gradient algorithm, which also took the longest to train for both the 3- and 4-layer LSTM models. On average, the 4-layer model took 69% longer to train than the 3-layer model, yet its validation loss and perplexity were not significantly improved and even became worse under specific conditions. On the other hand, when comparing the automatically generated sentences, the 4-layer model tended to generate sentences closer to natural language than the 3-layer model. Although there were slight differences in the completeness of the generated sentences between the models, sentence generation performance was quite satisfactory in all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost perfect grammatically. These results are expected to be widely applicable to Korean language processing and speech recognition, which are the basis of artificial intelligence systems.
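The phoneme-level decomposition such a model relies on can be done with plain Unicode arithmetic, since precomposed Hangul syllables (from U+AC00) encode their initial, medial, and final jamo positionally. A minimal sketch (the exact jamo inventory and pre-processing used by the paper may differ):

```python
# Standard Unicode jamo inventories in code-point order
CHO = ["ㄱ","ㄲ","ㄴ","ㄷ","ㄸ","ㄹ","ㅁ","ㅂ","ㅃ","ㅅ","ㅆ",
       "ㅇ","ㅈ","ㅉ","ㅊ","ㅋ","ㅌ","ㅍ","ㅎ"]
JUNG = ["ㅏ","ㅐ","ㅑ","ㅒ","ㅓ","ㅔ","ㅕ","ㅖ","ㅗ","ㅘ","ㅙ",
        "ㅚ","ㅛ","ㅜ","ㅝ","ㅞ","ㅟ","ㅠ","ㅡ","ㅢ","ㅣ"]
JONG = [""] + ["ㄱ","ㄲ","ㄳ","ㄴ","ㄵ","ㄶ","ㄷ","ㄹ","ㄺ","ㄻ","ㄼ",
       "ㄽ","ㄾ","ㄿ","ㅀ","ㅁ","ㅂ","ㅄ","ㅅ","ㅆ","ㅇ","ㅈ",
       "ㅊ","ㅋ","ㅌ","ㅍ","ㅎ"]

def to_phonemes(text):
    """Split precomposed Hangul syllables into jamo; pass other
    characters (punctuation, spaces) through unchanged."""
    out = []
    for ch in text:
        code = ord(ch) - 0xAC00
        if 0 <= code < 11172:                 # Hangul syllable block
            out.append(CHO[code // 588])      # 588 = 21 medials * 28 finals
            out.append(JUNG[(code % 588) // 28])
            if code % 28:                     # final consonant, if any
                out.append(JONG[code % 28])
        else:
            out.append(ch)
    return out

phonemes = to_phonemes("한글")
```

The inverse mapping (recomposing generated jamo sequences into syllables) uses the same arithmetic in reverse, which is what guarantees that only legitimate Korean letters are emitted.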