• Title/Summary/Keyword: Phoneme

Speech Recognition of the Korean Vowel 'ㅐ', Based on Time Domain Sequence Patterns (시간 영역 시퀀스 패턴에 기반한 한국어 모음 'ㅐ'의 음성 인식)

  • Lee, Jae Won
    • KIISE Transactions on Computing Practices / v.21 no.11 / pp.713-720 / 2015
  • As computing and network technologies are further developed, communication equipment continues to become smaller, and as a result, mobility is now a predominant feature of current technology. Therefore, demand for speech recognition systems in mobile environments is rapidly increasing. This paper proposes a novel method to recognize the Korean vowel 'ㅐ' as a part of a phoneme-based Korean speech recognition system. The proposed method works by analyzing a sequence of patterns in the time domain instead of the frequency domain, and consequently, its use can markedly reduce computational costs. Three algorithms are presented to detect typical sequence patterns of 'ㅐ', and these are combined to produce the final decision. The results of the experiment show that the proposed method has an accuracy of 89.1% in recognizing the vowel 'ㅐ'.
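The abstract does not spell out the three pattern-detection algorithms, so the following is only a minimal sketch of the general idea of classifying a frame from time-domain sequence patterns instead of spectra: a voiced vowel frame shows nearly constant spacing between its amplitude peaks, which can be checked without any frequency transform. The function names and thresholds are illustrative assumptions, not the paper's method.

```python
import numpy as np

def peak_sequence(frame, min_gap=20):
    """Indices of local amplitude maxima spaced at least min_gap samples apart."""
    peaks = []
    for i in range(1, len(frame) - 1):
        if frame[i] > frame[i - 1] and frame[i] >= frame[i + 1]:
            if not peaks or i - peaks[-1] >= min_gap:
                peaks.append(i)
    return peaks

def is_periodic_pattern(frame, min_gap=20, tolerance=0.2):
    """Flag a frame whose peak spacing is nearly constant -- a cheap
    time-domain cue for a voiced vowel, with no frequency-domain analysis."""
    peaks = peak_sequence(frame, min_gap)
    if len(peaks) < 3:
        return False
    gaps = np.diff(peaks)
    return float(gaps.std() / gaps.mean()) < tolerance
```

Because only comparisons and a couple of array statistics are involved, this kind of test is far cheaper than an FFT per frame, which is the computational advantage the abstract claims.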

Detection of Intersection Points of Handwritten Hangul Strokes using Run-length (런 길이를 이용한 필기체 한글 자획의 교점 검출)

  • Jung, Min-Chul
    • Journal of the Korea Academia-Industrial cooperation Society / v.7 no.5 / pp.887-894 / 2006
  • This paper proposes a new method that detects the intersection points of handwritten Hangul strokes using run-lengths. The method first finds the stroke width of handwritten Hangul characters using both horizontal and vertical run-lengths, then extracts the horizontal and vertical strokes of a character using that stroke width, and finally detects the intersection points of those strokes. The analysis of the horizontal and vertical strokes uses not the strokes' angles but the stroke width and the changes in run-length. The intersection points of the strokes become candidate parts for phoneme segmentation, one of the main techniques for off-line handwritten Hangul recognition. The segmented strokes represent the features for handwritten Hangul recognition.
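As a rough sketch of the run-length idea (the helper names and the most-common-run heuristic are assumptions, not the paper's exact procedure), the stroke width of a binary character image can be estimated from the run lengths of all rows and columns:

```python
from collections import Counter

def run_lengths(line):
    """Lengths of consecutive foreground (1) runs in a binary row or column."""
    runs, count = [], 0
    for px in line:
        if px:
            count += 1
        elif count:
            runs.append(count)
            count = 0
    if count:
        runs.append(count)
    return runs

def stroke_width(binary_image):
    """Estimate stroke width as the most common run length taken over
    all horizontal rows and vertical columns of the character image."""
    counts = Counter()
    for row in binary_image:
        counts.update(run_lengths(row))
    for col in zip(*binary_image):
        counts.update(run_lengths(col))
    return counts.most_common(1)[0][0] if counts else 0
```

A run much longer than this estimated width then signals a stroke running along the scan direction, which is the cue the paper uses to extract horizontal and vertical strokes before intersecting them.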

Phoneme Segmentation Method of Handwritten Hangul Based on Vowel Structure and Heuristic Rules (모음 구조와 경험적인 규칙을 이용한 필기된 한글의 자소 분리 방법)

  • Gwak, Hu-Geun;Choe, Yeong-U;Jeong, Gyu-Sik
    • The KIPS Transactions:PartB / v.8B no.1 / pp.10-19 / 2001
  • Conventional phoneme segmentation methods for handwritten Hangul generally have the following drawbacks: 1) segmentation is usually applied to thinned images and therefore depends heavily on the quality of the thinning result, and 2) the methods were developed only for simple contacts, where a clear segmentation feature point appears when phonemes touch, so segmentation errors occur easily for complex contacts where feature points are missing or incorrectly located. To overcome these drawbacks, this paper performs phoneme segmentation on unthinned images and proposes a method that segments phonemes reliably not only when a clear segmentation feature point appears but also when feature points are missing or incorrectly located. We classify phoneme contacts by type and, for each type, segment phonemes using the structure and relative position of the vowel, the shape of the contact, and heuristic rules. The proposed segmentation method is applied in the following order: 1) trace the vowel in the input character image; 2) from the viewpoint of the vowel, judge whether the feature points arising from the contact can be extracted clearly; 3) identify the contact type for each case; 4) apply the segmentation method for that contact type. In segmentation experiments on the handwritten Hangul database PE92, a high segmentation rate of 89.5% was obtained, confirming the effectiveness of the proposed method.
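The four-step procedure above amounts to a dispatch on the detected contact type. The sketch below is purely illustrative: the type labels and toy rules stand in for the paper's heuristic rules, which the abstract does not give.

```python
def segment_by_contact_type(contact_type, strokes, rules):
    """Apply the segmentation rule registered for the detected contact type,
    falling back to a default rule for unknown types."""
    return rules.get(contact_type, rules["default"])(strokes)

# Toy stand-ins for the paper's per-type heuristic rules (illustrative only).
rules = {
    "clear_feature_point": lambda s: s.split("+"),
    "no_feature_point": lambda s: [s],
    "default": lambda s: [s],
}
```

The design point is that each contact type gets its own rule, so complex contacts without clear feature points are handled by their own branch instead of failing the single feature-point-based rule.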

SVM-based Utterance Verification Using Various Confidence Measures (다양한 신뢰도 척도를 이용한 SVM 기반 발화검증 연구)

  • Kwon, Suk-Bong;Kim, Hoi-Rin;Kang, Jeom-Ja;Koo, Myong-Wan;Ryu, Chang-Sun
    • MALSORI / no.60 / pp.165-180 / 2006
  • In this paper, we present several confidence measures (CMs) for speech recognition systems to evaluate the reliability of recognition results. We propose heuristic CMs such as the mean log-likelihood score, the N-best word log-likelihood ratio, and likelihood sequence fluctuation, as well as likelihood ratio testing (LRT)-based CMs using several types of anti-models. Furthermore, we propose new algorithms that add weighting terms to phone-level log-likelihood ratios when merging them into word-level log-likelihood ratios. These weighting terms are computed from the distance between acoustic models and from knowledge-based phoneme classifications. LRT-based CMs substantially outperform heuristic CMs, and LRT-based CMs using phonetic information achieve a relative reduction in equal error rate of 8-13% compared to the baseline LRT-based CMs. We use a support vector machine to fuse several CMs and improve the performance of utterance verification. Our experiments show that selecting CMs with low mutual correlation is more effective than selecting highly correlated CMs.
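Two of the heuristic CMs named above can be sketched directly from their definitions (a minimal illustration; the paper's exact normalizations are not given in the abstract):

```python
def mean_log_likelihood(frame_loglikes):
    """Mean per-frame acoustic log-likelihood of the recognized word."""
    return sum(frame_loglikes) / len(frame_loglikes)

def nbest_llr(best_loglike, competitor_loglikes):
    """N-best word log-likelihood ratio: best-hypothesis score minus the
    average score of the competing N-best hypotheses (log domain)."""
    return best_loglike - sum(competitor_loglikes) / len(competitor_loglikes)
```

A large `nbest_llr` means the winning hypothesis clearly beats its competitors, which is why such scores are useful inputs to an SVM that accepts or rejects the utterance.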

A Study on Reexamination of the syllable errors of nasal consonant ending for Chinese learners in the Korean language study (중국인 학습자 비음 종성 /ㄴ/, /ㅇ/ 음절의 발음 오류 재고 -한·중 음절 유형을 통하여-)

  • Zhang, Jian
    • Journal of Korean language education / v.28 no.1 / pp.251-268 / 2017
  • This study is based on differences in syllable type between Korean and Chinese pronunciation. For example, the nasal consonant endings [n] and [ŋ] exist in both Korean and Chinese phonetics. Nevertheless, in practical training, Chinese learners make errors in pronouncing Korean syllables with the nasal consonant endings [n] and [ŋ]. Previous research analyzed these pronunciation errors mainly from the perspective of the phonological system and phoneme combination rules. In this study, by contrast, the analysis is based on the differences between Korean and Chinese syllable categories to identify the cause of the pronunciation errors. The main finding is that in Chinese pronunciation, a nasal syllable rime combines only with a [back] tongue vowel, whereas this restriction does not apply in Korean. Korean syllable types such as 앤, 응, 옹, 앵, 은, 온, and 언 therefore do not exist in the Chinese language. When Chinese learners pronounce these types of syllables, they apply the Chinese vowel-nasal rime combination rule, which results in pronunciation errors.

Performance of speech recognition unit considering morphological pronunciation variation (형태소 발음변이를 고려한 음성인식 단위의 성능)

  • Bang, Jeong-Uk;Kim, Sang-Hun;Kwon, Oh-Wook
    • Phonetics and Speech Sciences / v.10 no.4 / pp.111-119 / 2018
  • This paper proposes a method to improve speech recognition performance by extracting various pronunciations of pseudo-morpheme units from an eojeol-unit corpus and generating new recognition units that take pronunciation variation into account. In the proposed method, we first align the pronunciations of the eojeol units and the pseudo-morpheme units, and then expand the pronunciation dictionary by extracting new pronunciations of the pseudo-morpheme units from the pronunciations of the eojeol units. We then propose a new pronunciation-dependent recognition unit obtained by tagging the resulting phoneme symbols according to the pseudo-morpheme units. The proposed units and their extended pronunciations are incorporated into the lexicon and language model of the speech recognizer. Experiments for performance evaluation are performed using a Korean speech recognizer with a trigram language model obtained from a 100-million pseudo-morpheme corpus and an acoustic model trained on 445 hours of multi-genre broadcast speech data. The proposed method reduces the word error rate relatively by 13.8% on the news-genre evaluation data and by 4.5% on the total evaluation data.
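The tagging step can be pictured with a toy helper; the unit-naming scheme below is an assumption for illustration only, since the abstract does not specify the exact tag format.

```python
def pron_dependent_units(morpheme, pronunciations):
    """One recognition unit per observed pronunciation variant of a
    pseudo-morpheme, formed by tagging the unit with its phoneme string."""
    return {morpheme + "/" + p.replace(" ", "_"): p for p in pronunciations}
```

Each distinct pronunciation variant then becomes its own lexicon entry and language-model token, so the recognizer can score the variants separately instead of collapsing them into one unit.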

Acoustic analysis of fricatives in dysarthric speakers with cerebral palsy

  • Hernandez, Abner;Lee, Ho-young;Chung, Minhwa
    • Phonetics and Speech Sciences / v.11 no.3 / pp.23-29 / 2019
  • This study acoustically examines the quality of fricatives produced by ten dysarthric speakers with cerebral palsy. Previous similar studies tend to focus only on sibilants, but to better understand how dysarthria affects fricatives we selected a range of samples with different places of articulation and voicing. The Universal Access (UA) Speech database was used to select thirteen words beginning with one of the English fricatives (/f/, /v/, /s/, /z/, /ʃ/, /ð/). The following four measurements were taken for both dysarthric and healthy speakers: phoneme duration, mean spectral peak, variance, and skewness. Results show that even speakers with mild dysarthria have significantly longer fricatives and a lower mean spectral peak than healthy speakers. Furthermore, mean spectral peak and variance showed significant group effects for both healthy and dysarthric speakers, and both measures were also useful for discriminating several places of articulation in both groups. Lastly, the spectral measurements displayed important group differences when severity was taken into account. These findings show that there is a general degradation in the production of fricatives by dysarthric speakers, but the differences depend on the severity of dysarthria and on the type of measurement taken.
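A common way to obtain such spectral measurements is to treat the magnitude spectrum of a frame as a probability distribution and take its moments. The sketch below follows that standard recipe; the paper's exact pre-processing (windowing, pre-emphasis) is not given in the abstract, and its "mean spectral peak" may be defined differently from the plain spectral mean used here.

```python
import numpy as np

def spectral_moments(frame, sr):
    """Spectral mean, variance, and skewness of a frame, computed by
    treating the magnitude spectrum as a probability distribution."""
    mag = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    p = mag / mag.sum()                       # normalize to a distribution
    mean = float((freqs * p).sum())
    var = float(((freqs - mean) ** 2 * p).sum())
    skew = float(((freqs - mean) ** 3 * p).sum() / var ** 1.5) if var > 0 else 0.0
    return mean, var, skew
```

For a fricative, a lower spectral mean and different variance/skewness relative to healthy speakers would indicate the kind of degradation the study reports.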

Effect of Music Training on Categorical Perception of Speech and Music

  • L., Yashaswini;Maruthy, Sandeep
    • Journal of Audiology &amp; Otology / v.24 no.3 / pp.140-148 / 2020
  • Background and Objectives: The aim of this study is to evaluate the effect of music training on the characteristics of auditory perception of speech and music. The perception of speech and music stimuli was assessed across their respective stimulus continua, and the resultant plots were compared between musicians and non-musicians. Subjects and Methods: Thirty musicians with formal music training and twenty-seven non-musicians participated in the study (age: 20 to 30 years). They were assessed on identification of consonant-vowel syllables (/da/ to /ga/), vowels (/u/ to /a/), a vocal music note (/ri/ to /ga/), and an instrumental music note (/ri/ to /ga/) across their respective stimulus continua. Each continuum contained 15 tokens with equal step size between adjacent tokens. The resultant identification scores were plotted against each token and analyzed for the presence of a categorical boundary. If a categorical boundary was found, the plots were analyzed for six parameters of categorical perception: the point of 50% crossover, the lower edge of the categorical boundary, the upper edge of the categorical boundary, the phoneme boundary width, the slope, and the intercepts. Results: Overall, the results showed that speech and music are perceived differently by musicians and non-musicians. In musicians, both speech and music are perceived categorically, while in non-musicians, only speech is perceived categorically. Conclusions: The findings of the present study indicate that music is perceived categorically by musicians, even if the stimulus is devoid of vocal tract features. The findings support the view that categorical perception is strongly influenced by training, and the results are discussed in light of the motor theory of speech perception.
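Several of the six parameters can be read off the identification function by linear interpolation. The sketch below computes the 50% crossover and a boundary width from assumed 75%/25% edge levels; the paper's exact edge definition is not stated in the abstract.

```python
def threshold_crossing(tokens, scores, level):
    """Token value where the identification curve crosses the given level,
    found by linear interpolation between adjacent tokens."""
    for i in range(len(tokens) - 1):
        y0, y1 = scores[i], scores[i + 1]
        if (y0 - level) * (y1 - level) <= 0 and y0 != y1:
            x0, x1 = tokens[i], tokens[i + 1]
            return x0 + (level - y0) * (x1 - x0) / (y1 - y0)
    return None

def boundary_parameters(tokens, scores):
    """50% crossover plus boundary edges (75%/25% here) and their width."""
    lower = threshold_crossing(tokens, scores, 75)
    upper = threshold_crossing(tokens, scores, 25)
    return {"crossover": threshold_crossing(tokens, scores, 50),
            "lower_edge": lower, "upper_edge": upper,
            "width": upper - lower}
```

A steep identification function yields a narrow width, which is the signature of categorical (rather than continuous) perception that the study compares between groups.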

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.71-88 / 2017
  • Language models were originally developed for speech recognition and language processing. Using a set of example sentences, a language model predicts the next word or character based on sequential input data. N-gram models have been widely used, but they cannot model the correlation between input units efficiently, since they are probabilistic models based on the frequency of each unit in the training set. Recently, with the development of deep learning algorithms, recurrent neural network (RNN) models and long short-term memory (LSTM) models have been widely used as neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect dependencies between the objects that are entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012).

    To train a neural language model, texts need to be decomposed into words or morphemes. However, since a training set of sentences generally contains a huge number of words or morphemes, the dictionary becomes very large, which increases model complexity. In addition, word-level or morpheme-level models can only generate vocabulary contained in the training set. Furthermore, for highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to introduce errors in the decomposition process (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit that comprises Korean texts. We construct the language model using three or four LSTM layers. Each model was trained using the stochastic gradient algorithm as well as more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was conducted on Old Testament texts using the deep learning package Keras with a Theano backend.

    After pre-processing the texts, the dataset contained 74 unique characters, including vowels, consonants, and punctuation marks. We then constructed input vectors of 20 consecutive characters, with the following 21st character as the output. In total, 1,023,411 input-output pairs were included in the dataset, which we divided into training, validation, and test sets in a 70:15:15 proportion. All simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss function evaluated on the validation set, the perplexity evaluated on the test set, and the training time of each model.

    All the optimization algorithms except the stochastic gradient algorithm showed similar validation loss and perplexity, clearly superior to those of the stochastic gradient algorithm, which also took the longest to train for both the 3- and 4-layer LSTM models. On average, the 4-layer LSTM model took 69% longer to train than the 3-layer model, yet its validation loss and perplexity were not significantly better and even became worse under specific conditions. On the other hand, when comparing the automatically generated sentences, the 4-layer LSTM model tended to generate sentences closer to natural language than the 3-layer model. Although there were slight differences in the completeness of the generated sentences between the models, sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost perfect grammatically. The results of this study are expected to be widely used for Korean language processing and speech recognition, which are the basis of artificial intelligence systems.
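The dataset construction described above (a closed character vocabulary, and windows of 20 consecutive characters predicting the 21st) can be sketched as:

```python
def build_vocab(text):
    """Map each distinct character (phoneme symbol, punctuation, ...) to an index."""
    return {c: i for i, c in enumerate(sorted(set(text)))}

def make_windows(text, window=20):
    """Build (input, target) pairs: each run of 20 consecutive characters
    predicts the character that immediately follows it."""
    return [(text[i:i + window], text[i + window])
            for i in range(len(text) - window)]
```

Applied to the full pre-processed corpus, this sliding-window scheme is what yields the 1,023,411 input-output pairs mentioned above; each pair is then one-hot encoded over the 74-character vocabulary before being fed to the LSTM layers.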