• Title/Summary/Keyword: Korean consonants (한국어 자음)

Search Results: 125

A COMPUTER ANALYSIS ON THE KOREAN CONSONANT SOUND DISTORTION IN RELATION TO THE PALATAL PLATE THICKNESS -Dentoalveolar and hard palatal consonant- (구개상의 두께에 따른 한국어 자음의 발음 변화에 관한 컴퓨터 분석 - 치조음, 경구개음-)

  • Woo, Yi-Hyung;Choi, Dae-Kyun;Choi, Boo-Byung;Park, Nam-Soo
    • The Journal of Korean Academy of Prosthodontics
    • /
    • v.25 no.1
    • /
    • pp.71-94
    • /
    • 1987
  • This study was carried out to investigate the sound distortion that follows alteration of the palatal plate thickness. Two healthy 24-year-old male subjects were selected; born in Seoul, both spoke the Seoul dialect. First, their productions of /na(나)/, /da(다)/, /la(라)/, /ja(자)/, /cha(차)/, and /ta(타)/ were recorded without a plate inserted, and then the sounds were recorded with palatal plates of different thicknesses. The plates were fabricated in three types: B-type with a palatal thickness of 1.0 mm, C-type with 2.5 mm, and D-type with 2.5 mm at the dentoalveolar portion and 1.0 mm over the residual portion. A series of analyses was performed on a 16-bit computer to quantify the sound distortions: LPC analysis (without weighting, with pre-weighting, and with post-weighting) of the consonant and vowel portions, formant frequencies of the vowels, and word durations of the consonants. The findings led to the following conclusions: 1. The distortion rates of the two informants were not correlated. 2. Generally, the vowels were not affected by palatal plate thickness in the formant analysis; however, more distortion was detected in the LPC analysis, especially with the C- and D-type plates. 3. Consonant distortion was more evident with the C- and D-type plates. 4. The second formant was the most disturbed and was reduced in all consonants with a palatal plate inserted, especially the C- and D-type plates. 5. Word duration was shortened with a plate inserted (except for /ja/ and /cha/), especially with the C- and D-types. 6. The dentoalveolar and hard palatal sounds were severely distorted with a plate inserted, and they were affected mainly by the thickness of the dentoalveolar portion. 7. Palatal plate thickness correlated with consonant quality.


The Study on the Characteristics of Korean Stop Consonants (한국어 파열자음의 특성에 관한 연구)

  • 서동일;표화영;강성석;최홍식
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics
    • /
    • v.8 no.2
    • /
    • pp.217-224
    • /
    • 1997
  • The present study was performed to investigate the voice onset time (VOT) of Korean stop consonants, as an expansion of Pyo and Choi (1996), together with the intensity and air flow rate of Korean stops as a preliminary study for classical singing training. Nine Korean stops (/p, p', pʰ/, /t, t', tʰ/, /k, k', kʰ/) and the vowel /a/ were used as speech materials. CV and VCV syllable patterns were used for VOT measurement, and the CV pattern was used for intensity and air flow rate measurement. Five males and five females pronounced the speech tasks at comfortable pitch and intensity, and VOT, intensity, and air flow rate were measured. For the prevocalic stops, bilabials showed the shortest VOT and velars the longest, except among the unaspirated stops, where the velar /k'/ was the shortest and the alveolar /t'/ the longest. By degree of aspiration, the heavily aspirated stops showed the longest VOT and the unaspirated the shortest. The intervocalic stops showed results similar to the prevocalic stops, except that among the slightly aspirated stops the alveolar was the longest, and among the bilabials the slightly aspirated /p/ was the shortest, unlike the prevocalic stops, where the unaspirated /p'/ was the shortest. For all prevocalic stops, the air flow rate was highest in the heavily aspirated stops, second in the slightly aspirated, and lowest in the unaspirated. As a whole, bilabials were the highest and velars the lowest, except among the heavily aspirated stops, where the alveolar was the lowest. In terms of intensity, the unaspirated stops and bilabials were the highest and the heavily aspirated stops and velars the lowest, except among the slightly aspirated stops, where the bilabials were the lowest and the alveolars the highest.

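The VOT measurement above amounts to subtracting the burst-release time from the voicing-onset time for each token. A minimal sketch with made-up annotation values (not data from the study):

```python
from statistics import mean

# Hypothetical annotated times in seconds: (burst release, voicing onset)
tokens = {
    "pʰ": [(0.100, 0.165), (0.210, 0.280)],  # heavily aspirated bilabial
    "p'": [(0.100, 0.108), (0.205, 0.212)],  # unaspirated (tense) bilabial
}

def mean_vot_ms(pairs):
    """Mean voice onset time in milliseconds over a list of tokens."""
    return mean((onset - burst) * 1000.0 for burst, onset in pairs)

summary = {stop: mean_vot_ms(pairs) for stop, pairs in tokens.items()}
```

Averaging such per-token values by stop category is what yields the place-of-articulation and aspiration comparisons reported in the abstract.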

Korean Phoneme Recognition Using Self-Organizing Feature Map (SOFM 신경회로망을 이용한 한국어 음소 인식)

  • Jeon, Yong-Koo;Yang, Jin-Woo;Kim, Soon-Hyob
    • The Journal of the Acoustical Society of Korea
    • /
    • v.14 no.2
    • /
    • pp.101-112
    • /
    • 1995
  • In order to construct a feature map-based phoneme classification system for speech recognition, two procedures are usually required: clustering and labeling. In this paper, we present a phoneme classification system based on Kohonen's Self-Organizing Feature Map (SOFM), which serves as both clusterer and labeler. The SOFM is known to perform a self-organizing process that yields an optimal local topographical mapping of the signal space and a reasonably high accuracy in recognition tasks. Consequently, the SOFM can be effectively applied to the recognition of phonemes. In addition, to improve the performance of the phoneme classification system, we propose a learning algorithm that incorporates the classical K-means clustering algorithm in a fine-tuning stage. To evaluate the performance of the proposed phoneme classification algorithm, we first use a total of 43 phonemes, which constitute six intra-class feature maps for six different phoneme classes. From speaker-dependent phoneme classification tests using these six feature maps, we obtain a recognition rate of 87.2% and confirm that the proposed algorithm is an efficient method for improving recognition performance and convergence speed.

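The SOFM-clustering-plus-K-means-fine-tuning scheme described above can be sketched as follows. This is a generic illustration, not the authors' implementation; the grid size, decay schedules, and data are placeholder assumptions.

```python
import numpy as np

def train_sofm(data, grid=(4, 4), epochs=10, lr0=0.5, sigma0=2.0, seed=0):
    """Train a Kohonen self-organizing feature map: each input pulls its
    best-matching unit (BMU) and that unit's grid neighbours toward it,
    with learning rate and neighbourhood width decaying over time."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    w = rng.normal(size=(rows * cols, data.shape[1]))
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            frac = 1.0 - step / n_steps
            lr = lr0 * frac
            sigma = sigma0 * frac + 0.5
            bmu = int(np.argmin(np.linalg.norm(w - x, axis=1)))
            d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            h = np.exp(-d2 / (2.0 * sigma ** 2))  # neighbourhood function
            w += lr * h[:, None] * (x - w)
            step += 1
    return w

def fine_tune_kmeans(w, data, iters=10):
    """K-means fine-tuning of the codebook, in the spirit of the paper's
    combined learning algorithm: each codebook vector moves to the mean
    of the data points currently assigned to it."""
    for _ in range(iters):
        dists = np.linalg.norm(data[:, None, :] - w[None, :, :], axis=2)
        labels = np.argmin(dists, axis=1)
        for k in range(len(w)):
            pts = data[labels == k]
            if len(pts):
                w[k] = pts.mean(axis=0)
    return w
```

On two well-separated synthetic clusters, the fine-tuned codebook places vectors near both cluster centres, which is the behaviour the labeling stage relies on.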

Physiologic Phonetics for Korean Stop Production (한국어 자음생성의 생리음성학적 특성)

  • Hong, Ki-Hwan;Yang, Yoon-Soo
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics
    • /
    • v.17 no.2
    • /
    • pp.89-97
    • /
    • 2006
  • The stop consonants in Korean are classified into three types according to the manner of articulation: unaspirated (UA), slightly aspirated (SA), and heavily aspirated (HA) stops. Both the UA and HA types are always voiceless in any environment. Generally, voice onset time (VOT) can be measured spectrographically from the release of the consonant burst to the onset of the following vowel. The VOT of the UA type is within 20 msec of the burst, about 40-50 msec in the SA, and 50-70 msec in the HA. There have been many efforts to clarify the properties that differentiate these manner categories. Umeda et al.1) found that the fundamental frequency at voice onset after both the UA and HA consonants was higher than that after the SA consonants, and that the voice onset times were longest in the HA, followed by the SA and UA. Han et al.2) reported in their speech synthesis and perception studies that the SA and UA stops differed primarily in terms of a gradual versus a relatively rapid intensity build-up of the following vowel after the stop release. Lee et al.3) measured both the intraoral and subglottal air pressure and found that the subglottal pressure was higher for the HA stop than for the other two stops. They also compared the dynamic pattern of the subglottal pressure slope for the three categories and found that the HA stop showed the most rapid increase in subglottal pressure in the time period immediately before the stop release. Kagaya4) reported fiberscopic and acoustic studies of the Korean stops. He noted that the UA type may be characterized by a completely adducted state of the vocal folds, stiffened vocal folds, and an abrupt decrease of the stiffness near the voice onset, while the HA type may be characterized by an extensively abducted state of the vocal folds and a heightened subglottal pressure; none of these positive gestures are observed for the SA type.
Hong et al.5) studied the electromyographic activity of the thyroarytenoid and posterior cricoarytenoid (PCA) muscles during stop production. They reported a marked and early activation of the PCA muscle, associated with a steep reactivation of the thyroarytenoid muscle before voice onset, in the production of the HA consonants. For the production of the UA consonants, little or no activation of the PCA muscle and the earliest and most marked reactivation of the thyroarytenoid muscle were characteristic. For the SA consonants, they reported a more moderate activation of the PCA muscle than for the UA consonants and the least and latest reactivation of the thyroarytenoid muscle. Hong et al.6) observed the vibratory movements of the vocal fold edges in terms of laryngeal gestures for the different types of stop consonants. The movements of the vocal fold edges were evaluated using high-speed digital images; EGG signals and acoustic waveforms were also evaluated and related to the vibratory movements of the vocal fold edges during stop production.

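The VOT ranges cited above (UA within 20 msec, SA about 40-50 msec, HA about 50-70 msec) suggest a rough rule-of-thumb classifier. A sketch follows; since the cited SA and HA ranges meet at 50 msec, the boundary used here is an illustrative choice, not a claim from the paper.

```python
def classify_stop_by_vot(vot_ms):
    """Rough manner classification of a Korean stop from its VOT (ms),
    using the ranges cited in the abstract above. The 50 ms SA/HA
    boundary is an illustrative choice, as the cited ranges adjoin."""
    if vot_ms <= 20:
        return "UA"  # unaspirated
    elif vot_ms <= 50:
        return "SA"  # slightly aspirated
    else:
        return "HA"  # heavily aspirated
```

For example, a token with a 10 msec VOT would fall in the UA bin, 45 msec in SA, and 65 msec in HA.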

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.71-88
    • /
    • 2017
  • Language models were originally developed for speech recognition and language processing. Using a set of example sentences, a language model predicts the next word or character from sequential input data. N-gram models have been widely used, but they cannot model the correlations between input units efficiently, since they are probabilistic models based on the frequency of each unit in the training set. Recently, as deep learning algorithms have developed, recurrent neural network (RNN) models and long short-term memory (LSTM) models have been widely used as neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect dependencies between the objects that are entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To train a neural language model, texts need to be decomposed into words or morphemes. However, since a training set of sentences generally includes a huge number of words or morphemes, the dictionary becomes very large, which increases model complexity. In addition, word-level or morpheme-level models can only generate vocabulary items that are contained in the training set. Furthermore, with highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to introduce errors in the decomposition process (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit that comprises Korean texts. We construct the language model using three or four LSTM layers. Each model was trained using the stochastic gradient algorithm and more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was conducted on Old Testament texts using the deep learning package Keras with a Theano backend.
After pre-processing the texts, the dataset included 74 unique characters, including vowels, consonants, and punctuation marks. We then constructed input vectors of 20 consecutive characters, each paired with the following 21st character as output. In total, 1,023,411 input-output pairs were included in the dataset, which we divided into training, validation, and test sets in a 70:15:15 ratio. All simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss function evaluated on the validation set, the perplexity evaluated on the test set, and the training time of each model. As a result, all the optimization algorithms except the stochastic gradient algorithm showed similar validation loss and perplexity, clearly superior to those of the stochastic gradient algorithm. The stochastic gradient algorithm also took the longest to train for both the 3- and 4-layer LSTM models. On average, the 4-layer LSTM model took 69% longer to train than the 3-layer model; however, its validation loss and perplexity did not improve significantly and even worsened under some conditions. On the other hand, when comparing the automatically generated sentences, the 4-layer LSTM model tended to generate sentences closer to natural language than the 3-layer model. Although there were slight differences in the completeness of the generated sentences between the models, sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean characters, and the use of postpositions and the conjugation of verbs were almost perfect grammatically. The results of this study are expected to be widely used for Korean language processing in the fields of language processing and speech recognition, which form the basis of artificial intelligence systems.
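The input-output construction described above (20 consecutive characters in, the 21st character out) can be sketched as follows; this is a minimal stand-in, not the authors' preprocessing code.

```python
def make_windows(text, window=20):
    """Build (input, target) pairs from a character sequence: each input
    is `window` consecutive character indices, and the target is the
    index of the character that immediately follows the window."""
    chars = sorted(set(text))
    char_to_idx = {c: i for i, c in enumerate(chars)}
    inputs, targets = [], []
    for i in range(len(text) - window):
        inputs.append([char_to_idx[c] for c in text[i:i + window]])
        targets.append(char_to_idx[text[i + window]])
    return inputs, targets, char_to_idx
```

A corpus of N characters yields N - window pairs under this scheme, which is how a single text produces the large training set described in the abstract; the index sequences would then be one-hot encoded or embedded before being fed to the LSTM layers.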