• Title/Summary/Keyword: vowel system

Search Results: 142

Acoustic Characteristics of Stop Consonant Production in the Motor Speech Disorders (운동성 조음장애에서 폐쇄자음 발성의 음향학적 특성)

  • Hong, Hee-Kyung; Kim, Moon-Jun; Yoon, Jin; Park, Hee-Taek; Hong, Ki-Hwan
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics / v.23 no.1 / pp.33-42 / 2012
  • Background and Objectives : Dysarthria refers to a group of speech disorders in which communication is impaired by paralysis, weakness, or incoordination of the speech musculature caused by damage to the central or peripheral nervous system. Because muscle control is impaired, the pitch, loudness, and speed of phonation are affected. Alternate motion rate and diadochokinesis are commonly used evaluation measures, and articulation is also an important evaluation item. The purpose of this study was to identify the acoustic characteristics of speech sound production in dysarthria patients. Materials and Methods : The subjects were 20 dysarthria patients and 20 controls. The voice samples consisted of bilabial, alveolar, and velar syllables for the diadochokinetic task, and the consonant articulation test consisted of bilabial, alveolar, and velar plosives. The analysis measures were 1) speaking rate, energy, and articulation time of diadochokinesis, and 2) voice onset time (VOT), total duration (TD), vowel duration (VD), and hold of the plosives. Results and Conclusions : The diadochokinetic rate of the dysarthria group was lower than that of the control group; in both groups the rate was highest in the order /t/>/p/>/k/. The minimum energy range per cycle during the diadochokinetic task was smaller in the dysarthria group than in the control group, with statistical significance for /p/, /k/, and /ptk/, while the maximum energy range was larger, with statistical significance for /t/ and /ptk/. Articulation time, gap, and total articulation time during the diadochokinetic task were significantly longer in the dysarthria group than in the control group. Articulation time followed the order /k/>/t/>/p/ in both groups, whereas gap followed the order /p/>/t/>/k/ in the control group and /p/>/k/>/t/ in the dysarthria group. VOT, TD, and VD of the plosives were longer in the dysarthria group than in the control group. Hold showed greater deviation than in the control group, which appears to result from reduced motility of the larynx and articulatory organs.
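
A rough, illustrative Python sketch (not the authors' analysis code) of how two of the measures reported above might be computed from a recording: the diadochokinetic (DDK) syllable rate from a short-time energy envelope, and voice onset time (VOT) from hand-marked burst and voicing-onset landmarks. The file name, frame sizes, and peak-picking threshold are assumptions for illustration only.

import numpy as np
import soundfile as sf
from scipy.signal import find_peaks

def rms_envelope(x, sr, frame_ms=20, hop_ms=10):
    """Short-time RMS energy envelope of a mono signal."""
    frame = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    n = 1 + max(0, (len(x) - frame) // hop)
    env = np.array([np.sqrt(np.mean(x[i * hop:i * hop + frame] ** 2)) for i in range(n)])
    times = np.arange(n) * hop / sr
    return times, env

def ddk_rate(wav_path):
    """Syllables per second for a repeated /p@/, /t@/, or /k@/ task."""
    x, sr = sf.read(wav_path)
    if x.ndim > 1:                                   # mix down to mono if needed
        x = x.mean(axis=1)
    times, env = rms_envelope(x, sr)
    # Treat each energy peak as one syllable; require at least 80 ms between
    # peaks (8 hops of 10 ms) and a height of 20% of the maximum (a guess).
    peaks, _ = find_peaks(env, height=0.2 * env.max(), distance=8)
    duration = times[-1] - times[0]
    return len(peaks) / duration if duration > 0 else 0.0

def vot_ms(burst_time_s, voicing_onset_s):
    """VOT = time from stop release (burst) to onset of voicing, in ms."""
    return (voicing_onset_s - burst_time_s) * 1000.0

# Hypothetical usage with a made-up file name and hand-marked landmarks:
# rate = ddk_rate("patient01_pataka.wav")
# vot = vot_ms(burst_time_s=1.234, voicing_onset_s=1.296)   # about 62 ms

In practice, burst and voicing-onset landmarks for VOT are usually marked by hand in a tool such as Praat; the helper above only converts such annotations into milliseconds.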

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn; Chung, Yeojin; Lee, Jaejoon; Yang, Jiheon
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.71-88 / 2017
  • Language models were originally developed for speech recognition and language processing. Using a set of example sentences, a language model predicts the next word or character from sequential input data. N-gram models have been widely used, but they cannot model correlations between input units efficiently because they are probabilistic models based on the frequency of each unit in the training set. Recently, with the development of deep learning, recurrent neural network (RNN) models and long short-term memory (LSTM) models have been widely used as neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect dependencies between the objects that are entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To train a neural language model, texts must be decomposed into words or morphemes. However, since a training set of sentences generally includes a huge number of words or morphemes, the dictionary becomes very large and the model complexity increases. In addition, word-level or morpheme-level models can generate only vocabulary contained in the training set. Furthermore, for highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to introduce errors during decomposition (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit that composes Korean text. We constructed the language model using three or four LSTM layers. Each model was trained using the stochastic gradient algorithm as well as more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was conducted with Old Testament texts using the deep learning package Keras with the Theano backend. After pre-processing the texts, the dataset included 74 unique characters, including vowels, consonants, and punctuation marks. We then constructed each input vector from 20 consecutive characters and the output as the following 21st character. In total, 1,023,411 input-output pairs were included in the dataset, which we divided into training, validation, and test sets in a 70:15:15 ratio. All simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss evaluated on the validation set, the perplexity evaluated on the test set, and the training time of each model. As a result, all the optimization algorithms except the stochastic gradient algorithm showed similar validation loss and perplexity, which were clearly superior to those of the stochastic gradient algorithm. The stochastic gradient algorithm also took the longest to train for both the 3- and 4-LSTM-layer models. On average, the 4-LSTM-layer model took 69% longer to train than the 3-LSTM-layer model, yet its validation loss and perplexity were not significantly improved and even became worse under specific conditions. On the other hand, when comparing the automatically generated sentences, the 4-LSTM-layer model tended to generate sentences closer to natural language than the 3-LSTM-layer model.
Although there were slight differences between the models in the completeness of the generated sentences, sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost grammatically perfect. The results of this study are expected to be widely useful for Korean language processing in the fields of natural language processing and speech recognition, which are the basis of artificial intelligence systems.
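
As a rough sketch of the setup described above, and not the authors' code, the snippet below builds a stacked-LSTM next-character predictor over one-hot windows of 20 characters and a 74-symbol vocabulary, and derives perplexity from the test-set cross-entropy. It is written against tf.keras rather than the Keras/Theano combination used in the paper, and the layer width, optimizer choice, and stand-in data are assumptions for illustration only.

import numpy as np
import tensorflow as tf

SEQ_LEN = 20   # 20 consecutive characters as input (per the abstract)
VOCAB = 74     # unique characters after pre-processing (per the abstract)

def build_model(num_lstm_layers=3, units=256):
    """Stacked LSTM language model predicting the next character."""
    model = tf.keras.Sequential()
    model.add(tf.keras.Input(shape=(SEQ_LEN, VOCAB)))
    for i in range(num_lstm_layers):
        # Every LSTM layer except the last returns full sequences so the layers stack.
        model.add(tf.keras.layers.LSTM(units, return_sequences=(i < num_lstm_layers - 1)))
    model.add(tf.keras.layers.Dense(VOCAB, activation="softmax"))
    # Any of "sgd", "adagrad", "rmsprop", "adadelta", "adam", "adamax", "nadam"
    # can be passed here to mirror the optimizer comparison described above.
    model.compile(optimizer="adam", loss="categorical_crossentropy")
    return model

def perplexity(model, x_test, y_test):
    """Perplexity = exp(mean cross-entropy) over the test set."""
    loss = model.evaluate(x_test, y_test, verbose=0)
    return float(np.exp(loss))

def next_char_id(model, one_hot_window):
    """Greedy next-character prediction from one (SEQ_LEN, VOCAB) window."""
    probs = model.predict(one_hot_window[np.newaxis], verbose=0)[0]
    return int(np.argmax(probs))

# Hypothetical usage with random stand-in data; real inputs would be one-hot
# windows sliced from the corpus and split 70:15:15:
# x = np.eye(VOCAB)[np.random.randint(VOCAB, size=(1000, SEQ_LEN))]
# y = np.eye(VOCAB)[np.random.randint(VOCAB, size=1000)]
# model = build_model(num_lstm_layers=3)
# model.fit(x[:700], y[:700], validation_data=(x[700:850], y[700:850]), epochs=1)
# print("test perplexity:", perplexity(model, x[850:], y[850:]))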