• Title/Summary/Keyword: Korean Vowel


Cross-sectional perception studies of children's monosyllabic word by naive listeners (일반 청자의 아동 발화 단음절에 대한 교차 지각 분석)

  • Ha, Seunghee;So, Jungmin;Yoon, Tae-Jin
    • Phonetics and Speech Sciences
    • /
    • v.14 no.1
    • /
    • pp.21-28
    • /
    • 2022
  • Previous studies have provided important findings on children's speech production development. They have revealed that essentially all aspects of children's speech shift toward adult-like characteristics over time. Nevertheless, few studies have examined the perceptual aspects of children's speech tokens, as perceived by naive adult listeners. To fill the gap between children's production and adults' perception, we conducted cross-sectional perceptual studies of monosyllabic words produced by children aged two to six years. Monosyllabic words in consonant-vowel-consonant form were extracted from children's speech samples and presented aurally to five listener groups (20 listeners in total). Generally, the agreement rate between children's productions of target words and adult listeners' responses increased with age. The perceptual responses to tokens produced by two-year-old children induced the largest discrepancies, and the responses to words produced by six-year-olds agreed the most. Further analyses were conducted to identify the sources of disagreement, including the types of segments and syllable structure. This study makes an important contribution to our understanding of the development and perception of children's speech across age groups.
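The headline measure in the abstract above is an agreement rate between children's target words and adult listeners' responses, computed per age group. A minimal sketch of how such a rate could be calculated; the function name, trial tuples, and sample data are illustrative assumptions, not the authors' materials:

```python
# Hypothetical sketch: percent agreement between target words and listener
# responses, grouped by the child's age.
from collections import defaultdict

def agreement_by_age(trials):
    """trials: iterable of (age, target_word, listener_response) tuples.
    Returns {age: proportion of responses matching the target}."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for age, target, response in trials:
        totals[age] += 1
        if response == target:
            hits[age] += 1
    return {age: hits[age] / totals[age] for age in totals}

trials = [
    (2, "kom", "kon"), (2, "kom", "pom"),  # invented two-year-old tokens
    (6, "kom", "kom"), (6, "kom", "kom"),  # invented six-year-old tokens
]
print(agreement_by_age(trials))  # {2: 0.0, 6: 1.0}
```

With real data, the per-age proportions would trace the age trend the study reports.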

Change in acoustic characteristics of voice quality and speech fluency with aging (노화에 따른 음질과 구어 유창성의 음향학적 특성 변화)

  • Hee-June Park;Jin Park
    • Phonetics and Speech Sciences
    • /
    • v.15 no.4
    • /
    • pp.45-51
    • /
    • 2023
  • Voice issues such as voice weakness that arise with age can have social and emotional impacts, potentially leading to feelings of isolation and depression. This study aimed to investigate the changes in acoustic characteristics resulting from aging, focusing on voice quality and spoken fluency. To this end, tasks involving sustained vowel phonation and paragraph reading were recorded for 20 elderly and 20 young participants. Voice-quality-related variables, including F0, jitter, shimmer, and Cepstral Peak Prominence (CPP) values, were analyzed along with speech-fluency-related variables, such as average syllable duration (ASD), articulation rate (AR), and speech rate (SR). The results showed that in the voice-quality-related measurements, F0 was higher for the elderly and voice quality was diminished, as indicated by increased jitter and shimmer and lower CPP values. The speech fluency analysis also demonstrated that the elderly spoke more slowly, as indicated by all three measurements (ASD, AR, and SR). Correlation analysis between voice quality and speech fluency showed a significant relationship between shimmer and CPP values and between ASD and SR values. This suggests that changes in spoken fluency can be identified early by measuring the variations in voice quality. This study further highlights the reciprocal relationship between voice quality and spoken fluency, emphasizing that deterioration in one can affect the other.
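Jitter and shimmer, mentioned above, are both "local" perturbation measures: the mean absolute cycle-to-cycle difference normalized by the mean value, applied to glottal periods (jitter) or to peak amplitudes (shimmer). A minimal sketch under that standard Praat-style definition; the sample period and amplitude values are invented:

```python
# Illustrative sketch (not the authors' analysis pipeline): "local" jitter
# and shimmer from per-cycle period and peak-amplitude sequences.

def local_perturbation(values):
    """Mean absolute difference between consecutive cycles,
    divided by the mean value (jitter for periods, shimmer for amplitudes)."""
    diffs = [abs(a - b) for a, b in zip(values[1:], values[:-1])]
    return (sum(diffs) / len(diffs)) / (sum(values) / len(values))

periods = [0.0100, 0.0102, 0.0099, 0.0101]  # seconds per glottal cycle
amps = [0.80, 0.78, 0.81, 0.79]             # peak amplitude per cycle

print(f"jitter  (local): {local_perturbation(periods):.2%}")
print(f"shimmer (local): {local_perturbation(amps):.2%}")
```

Higher values on either measure indicate a less periodic, rougher voice, which is the direction the study reports for the elderly group.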

Prosodic Phrasing and Focus in Korean

  • Baek, Judy Yoo-Kyung
    • Proceedings of the KSPS conference
    • /
    • 1996.10a
    • /
    • pp.246-246
    • /
    • 1996
  • Purpose: Some of the properties of prosodic phrasing and some acoustic and phonological effects of contrastive focus on the tonal pattern of Seoul Korean are explored, based on a brief experiment analyzing the fundamental frequency (=F0) contour of the author's speech. Data Base and Analysis Procedures: The examples were chosen to contain mostly nasal and liquid consonants, since it is difficult to track the formants of stops and fricatives during their consonantal intervals, and stops may cause an unwanted increase in the F0 value due to their burst into the following vowel. All examples were recorded three times and the spectrum of the most stable repetition was generated, from which the F0 contour of each sentence was obtained, peaks with a value higher than 250 Hz being interpreted as a high tone (=H). The result is then discussed within the prosodic hierarchy framework of Selkirk (1986) and compared with the tonal pattern of the Northern Kyungsang dialect of Korean reported in Kenstowicz & Sohn (1996). Prosodic Phrasing: In N.K. Korean, H never appears on both the object and the verb in a neutral sentence, which indicates that the object and the verb form a single Phonological Phrase (=φ), given that there is only one pitch peak per φ. However, in Seoul Korean both the object and the verb carry an H of their own, indicating that they are not contained in one φ. This violates the Optimality-Theoretic constraint Wrap-XP (=Enclose a lexical head and its arguments in one φ), while N.K. Korean obeys the constraint by grouping a VP into a single φ. This asymmetry can be resolved through a constraint that favors the separate grouping of each lexical category, Align-X^lex (=Align the left edge of a lexical category with that of a φ), which is ranked higher than Wrap-XP in Seoul Korean but lower in N.K. Korean.
(1) nuna-ka manll-ll mEk-nIn-ta ('sister-NOM garlic-ACC eat-PRES-DECL')
    a. (LLH) (LLH) (HLL) ---- Seoul Korean
    b. (LLH) (LLL LHL) ---- N.K. Korean
Focus and Phrasing: Two major effects of contrastive focus on phonological phrasing are found in Seoul Korean: (a) the peak of an Intonational Phrase (=IP) falls on the focused element; and (b) focus deletes all following prosodic structure. A focused element always attracts the peak of the IP, showing an increase of approximately 30 Hz compared with the peak of a non-focused IP. When a subject is focused, no H appears on either the object or the verb, and a focused object is never followed by a verb with H. The post-focus deletion of prosodic boundaries is forced through the interaction of StressFocus (=If F is a focus and DF is its semantic domain, the highest prominence in DF will be within F) and Rightmost-IP (=The peak of an IP projects from the rightmost φ). First, StressFocus requires the peak of the IP to fall on the focused element. Then, to avoid violating Rightmost-IP, all the boundaries after the focused element delete, minimizing the number of φ's intervening before the right edge of the IP. (2) (omitted) Conclusion: In general, there seem to be no direct alignment constraints between the syntactically focused element and the edge of a φ determined in phonology; all the alignment effects follow from the single requirement that the peak of the IP projects from the rightmost φ, as proposed in Truckenbrodt (1995).
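The labeling rule stated in the abstract (F0 peaks above 250 Hz read as high tones) can be sketched as a simple peak-plus-threshold check over per-syllable F0 values. The contour values below are invented for illustration:

```python
# Minimal sketch of the H/L labeling rule described above: a syllable gets H
# if its F0 is a local peak in the contour AND exceeds the 250 Hz threshold.

def tone_labels(f0_contour, threshold=250.0):
    """f0_contour: per-syllable F0 values in Hz. Returns an H/L string."""
    labels = []
    for i, f0 in enumerate(f0_contour):
        left = f0_contour[i - 1] if i > 0 else float("-inf")
        right = f0_contour[i + 1] if i < len(f0_contour) - 1 else float("-inf")
        is_peak = f0 > left and f0 > right
        labels.append("H" if is_peak and f0 > threshold else "L")
    return "".join(labels)

# e.g. an (LLH) phrase-final rise, as in the Seoul Korean pattern above
print(tone_labels([180.0, 200.0, 265.0]))  # LLH
```

Real tonal analysis would of course work on a continuous F0 track rather than one value per syllable; this only illustrates the thresholding step.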

Study on the Neural Network for Handwritten Hangul Syllabic Character Recognition (수정된 Neocognitron을 사용한 필기체 한글인식)

  • 김은진;백종현
    • Korean Journal of Cognitive Science
    • /
    • v.3 no.1
    • /
    • pp.61-78
    • /
    • 1991
  • This paper describes the application of a modified Neocognitron model with a backward path to the recognition of Hangul (Korean) syllabic characters. In the original report, Fukushima demonstrated that the Neocognitron can recognize handwritten numerical characters of 19×19 size. This version accepts 61×61 images of handwritten Hangul syllabic characters, or a part thereof, entered with a mouse or a scanner. It consists of an input layer and 3 pairs of Us-Uc layers. The last Uc layer of this version, the recognition layer, consists of 24 planes of 5×5 cells, which indicate, respectively, the identity of the grapheme receiving attention at a given time and its relative position in the input layer. The network has been trained on 10 simple vowel graphemes and 14 simple consonant graphemes and their spatial features. Some patterns that were not easily learned were trained more extensively. The trained network, which can classify individual graphemes under possible deformation, noise, size variance, transformation, or rotation, was then used to recognize Korean syllabic characters, using its selective attention mechanism for the image segmentation task within a syllabic character. In initial sample tests, our model correctly recognized up to 79% of the various test patterns of handwritten Korean syllabic characters. The results of this study show the Neocognitron to be a powerful model for recognizing deformed handwritten characters from a large character set by segmenting its input images into recognizable parts. The same approach may be applied to the recognition of Chinese characters, which are much more complex in both their structure and their graphemes. However, processing time appears to be the bottleneck before the model can be put to use. Special hardware, such as a neural chip, appears to be an essential prerequisite for the practical use of the model.
Further work is required before the model can recognize Korean syllabic characters consisting of complex vowels and complex consonants. Correct recognition of the neighboring area between two simple graphemes will become more critical for this task.
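The recognition-layer readout described above (24 planes of 5×5 cells, where the plane index identifies the grapheme and the cell position encodes its rough location in the input) amounts to an argmax over a 24×5×5 activation array. A hedged sketch with placeholder grapheme labels and invented activations, not the paper's implementation:

```python
# Hypothetical readout of a 24 x 5 x 5 recognition layer: find the most
# active cell; its plane index names a grapheme, its (row, col) position
# gives the grapheme's rough location in the input image.

GRAPHEMES = [chr(0x3131 + i) for i in range(24)]  # placeholder jamo labels

def read_recognition_layer(activations):
    """activations: 24 x 5 x 5 nested lists of cell responses.
    Returns (grapheme, (row, col)) of the most active cell."""
    best = (float("-inf"), None)
    for p, plane in enumerate(activations):
        for r, row in enumerate(plane):
            for c, value in enumerate(row):
                if value > best[0]:
                    best = (value, (p, r, c))
    p, r, c = best[1]
    return GRAPHEMES[p], (r, c)

acts = [[[0.0] * 5 for _ in range(5)] for _ in range(24)]
acts[3][2][4] = 0.9  # strongest response: plane 3, right of center
print(read_recognition_layer(acts))
```

The selective-attention mechanism would then suppress this grapheme's region and repeat the readout for the remaining graphemes of the syllable.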

Acoustic characteristics of speech-language pathologists related to their subjective vocal fatigue (언어재활사의 주관적 음성피로도와 관련된 음향적 특성)

  • Jeon, Hyewon;Kim, Jiyoun;Seong, Cheoljae
    • Phonetics and Speech Sciences
    • /
    • v.14 no.3
    • /
    • pp.87-101
    • /
    • 2022
  • In addition to administering a questionnaire (J-survey) on subjective vocal fatigue, voice samples were collected before and after speech-language pathology sessions from 50 female speech-language pathologists in their 20s and 30s in the Daejeon and Chungnam areas. We identified significant differences in Korean Vocal Fatigue Index scores between the fatigue and non-fatigue groups, with the most prominent differences in sections one and two. Regarding acoustic phonetic characteristics, both groups showed a pattern in which low-frequency band energy was relatively low and high-frequency band energy increased after the treatment sessions. This trend was well reflected in the low-to-high ratio of vowels, the LTAS slope, the energy in the third formant, and the energy in the 4,000-8,000 Hz range. A difference between the groups was observed only in the vowel energy of the low-frequency band (0-4,000 Hz) before treatment, with the non-fatigue group having a higher value than the fatigue group. This characteristic could be interpreted as a result of voice abuse and higher muscle tonus caused by long-term voice work. Among the perturbation parameters, shimmer local was lowered in the non-fatigue group after treatment, and the noise-to-harmonics ratio (NHR) was lowered in both groups following treatment. The decreases in NHR and shimmer local could be attributed to vocal cord hypertension, but it could also be concluded that the effective voice use of speech-language pathologists contributed to this effect, especially in the non-fatigue group. In the non-fatigue group, the rahmonics-to-noise ratio increased significantly after treatment, indicating that the harmonic structure was more stable after treatment.

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.71-88
    • /
    • 2017
  • Language models were originally developed for speech recognition and language processing. Using a set of example sentences, a language model predicts the next word or character from sequential input data. N-gram models have been widely used, but they cannot model the correlation between input units efficiently, since they are probabilistic models based on the frequency of each unit in the training set. Recently, as deep learning algorithms have developed, recurrent neural network (RNN) models and long short-term memory (LSTM) models have been widely used as neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect dependencies between the objects that are entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To train a neural language model, texts need to be decomposed into words or morphemes. However, since a training set of sentences generally includes a huge number of words or morphemes, the dictionary becomes very large, which increases model complexity. In addition, word-level or morpheme-level models can generate only the vocabulary items contained in the training set. Furthermore, with highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to introduce errors in the decomposition process (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit that composes Korean text. We construct the language model using three or four LSTM layers. Each model was trained using the stochastic gradient algorithm as well as more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was conducted with Old Testament texts using the deep learning package Keras, based on Theano.
After pre-processing the texts, the dataset included 74 unique characters, including vowels, consonants, and punctuation marks. We then constructed each input vector from 20 consecutive characters, with the following (21st) character as the output. In total, 1,023,411 input-output pairs were included in the dataset, which we divided into training, validation, and test sets in a 70:15:15 proportion. All simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss function evaluated on the validation set, the perplexity evaluated on the test set, and the training time of each model. As a result, all the optimization algorithms except the stochastic gradient algorithm showed similar validation loss and perplexity, clearly superior to those of the stochastic gradient algorithm. The stochastic gradient algorithm also took the longest to train for both the 3- and 4-LSTM models. On average, the 4-LSTM-layer model took 69% longer to train than the 3-LSTM-layer model, yet its validation loss and perplexity were not significantly better, and even became worse under specific conditions. On the other hand, when comparing the automatically generated sentences, the 4-LSTM-layer model tended to generate sentences closer to natural language than the 3-LSTM model. Although there were slight differences in the completeness of the generated sentences between the models, sentence generation performance was quite satisfactory in all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost perfectly grammatical. The results of this study are expected to be widely used for Korean language processing in the fields of language processing and speech recognition, which are the basis of artificial intelligence systems.
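The dataset construction described above (windows of 20 consecutive characters as input, the following 21st character as the prediction target) can be sketched directly. The helper name and sample text are illustrative; the LSTM network itself, which the paper builds in Keras, is omitted:

```python
# Sketch of the sliding-window dataset: each input is `window` consecutive
# characters; each target is the character that immediately follows.

def make_windows(text, window=20):
    """Return (inputs, targets) lists built from overlapping windows."""
    inputs, targets = [], []
    for i in range(len(text) - window):
        inputs.append(text[i:i + window])
        targets.append(text[i + window])
    return inputs, targets

text = "In the beginning God created the heaven and the earth."
xs, ys = make_windows(text)

# Character-to-index vocabulary (the paper reports 74 unique characters
# for the full corpus; this toy text has far fewer).
vocab = {ch: i for i, ch in enumerate(sorted(set(text)))}
print(len(xs), repr(xs[0]), "->", repr(ys[0]))
```

Each (input, target) pair would then be one-hot encoded via `vocab` before being fed to the stacked LSTM layers.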