• Title/Summary/Keyword: Phoneme Recognition (음소 인식)


Speech Recognition in Noise Environments Using SPLICE with Phonetic Information (음성학적인 정보를 포함한 SPLICE를 이용한 잡음환경에서의 음성인식)

  • Kim Doo Hee; Kim Hyung Soon
    • Proceedings of the Acoustical Society of Korea Conference / spring / pp.83-86 / 2002
  • Mismatches in ambient noise and channel characteristics between the training and recognition environments sharply degrade speech recognition performance. To compensate for such mismatches, various preprocessing methods in the cepstral domain have been attempted; recently, the SPLICE method, which obtains compensation vectors using stereo data and a Gaussian Mixture Model (GMM) of noisy speech, has shown good results (1). Whereas conventional SPLICE builds its Gaussian model from acoustic information alone over whole utterances, this paper takes the phoneme information of each utterance into account: the acoustic space is partitioned by phoneme, and a phoneme-specific compensation vector is trained using only the Gaussian model and speech data for that phoneme. The compensation vectors then describe in more detail how noise affects each phoneme. Experiments on the Aurora 2 database show that the proposed method outperforms conventional SPLICE. (A minimal sketch of the per-phoneme compensation follows below.)

  • PDF
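
The per-phoneme compensation idea above can be summarized in a few lines. The following is a minimal NumPy sketch, not the authors' implementation: the `PhonemeGMM` container, its `bias` field, and the posterior-weighted correction are assumptions based on the standard SPLICE formulation, with one GMM (and one set of bias vectors) per phoneme.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class PhonemeGMM:
    """Noisy-speech GMM and trained compensation vectors for one phoneme (illustrative)."""
    weights: np.ndarray    # (K,)   mixture weights
    means: np.ndarray      # (K, D) component means in the cepstral domain
    variances: np.ndarray  # (K, D) diagonal covariances
    bias: np.ndarray       # (K, D) compensation vectors trained from stereo data of this phoneme

def posteriors(gmm: PhonemeGMM, y: np.ndarray) -> np.ndarray:
    """Component posteriors p(k | y) for one noisy cepstral frame y."""
    diff = y - gmm.means
    log_lik = -0.5 * np.sum(diff ** 2 / gmm.variances + np.log(2 * np.pi * gmm.variances), axis=1)
    log_post = np.log(gmm.weights) + log_lik
    log_post -= log_post.max()               # numerical stability
    post = np.exp(log_post)
    return post / post.sum()

def compensate(gmm: PhonemeGMM, y: np.ndarray) -> np.ndarray:
    """SPLICE clean-speech estimate x_hat = y + sum_k p(k|y) * r_k, using the phoneme's own GMM."""
    return y + posteriors(gmm, y) @ gmm.bias
```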

A Study on the Phoneme Recognition using RBFN (RBFN을 이용한 음소인식에 관한 연구)

  • 안종영
    • Proceedings of the Acoustical Society of Korea Conference / 1995.06a / pp.88-91 / 1995
  • Multilayer neural networks have been used for pattern classification because supervised training lets them learn a desired input-output mapping. In this paper, Korean phoneme recognition is performed with GPFN and PNN, two kinds of RBFN. The structure of an RBFN is similar to that of a multilayer network, but it differs in the hidden-layer activation used in place of the sigmoid function, the reference vectors, and the choice of learning algorithm. In particular, the PNN replaces the sigmoid with exponential functions and classifies patterns without iterative training, so computation is fast. In our experiments, vowels and consonants were extracted from Korean monosyllables for phoneme recognition. The recognition rates on both training and evaluation data improved over those of a multilayer neural network, and a hybrid configuration also yielded improved recognition rates. (A small sketch of PNN-style classification follows below.)

  • PDF
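
Since the abstract above rests on the fact that a PNN classifies without an iterative training loop, a small NumPy sketch may help. The function name, the `sigma` smoothing width, and the dictionary layout are assumptions, not the paper's setup.

```python
import numpy as np

def pnn_classify(x, class_exemplars, sigma=1.0):
    """Probabilistic-neural-network style classification of one feature frame.

    class_exemplars maps a phoneme label to an array (N_c, D) of stored training frames;
    each class is scored with a Parzen-window (Gaussian-kernel) density, so no training loop is needed.
    """
    scores = {}
    for label, exemplars in class_exemplars.items():
        sq_dist = np.sum((exemplars - x) ** 2, axis=1)
        scores[label] = np.mean(np.exp(-sq_dist / (2.0 * sigma ** 2)))
    return max(scores, key=scores.get)
```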

Isolated Korean Digits Recognition Using Stochastic Transition Models With Phoneme-based VQ Codebooks (음소단위 코드북간의 확률적 전이 모델을 이용한 한국어 숫자음 인식에 관한 연구)

  • Choi, Hwan-Jin; Oh, Yung-Hwan
    • Annual Conference on Human and Language Technology / 1993.10a / pp.149-157 / 1993
  • Various methods have been proposed for speech recognition. In this study, recognition experiments on Korean digits were carried out using HMMs that model the index sequences of phoneme-level vector-quantized codebooks. The experimental results show a higher recognition rate than a conventional word-level HMM and a recognizer built as a finite state machine (FSM) of phoneme units. (A rough sketch of the VQ-plus-HMM pipeline follows below.)

  • PDF
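
The pipeline the abstract describes, vector-quantizing frames against a phoneme-level codebook and then modeling the index sequence with an HMM, can be outlined as below. This is a rough sketch under assumed names; the real system's topology and training are not reproduced.

```python
import numpy as np

def quantize(frames, codebook):
    """Map each feature frame (T, D) to the index of its nearest codebook vector (C, D)."""
    dists = np.linalg.norm(frames[:, None, :] - codebook[None, :, :], axis=2)
    return dists.argmin(axis=1)

def forward_log_prob(obs, start_p, trans_p, emit_p):
    """Likelihood of a codebook-index sequence under a discrete HMM (forward algorithm).

    start_p: (S,), trans_p: (S, S), emit_p: (S, C) emission probabilities over codebook indices.
    """
    alpha = start_p * emit_p[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ trans_p) * emit_p[:, o]
    return np.log(alpha.sum())

# Recognition: pick the digit model whose forward probability for the observed index sequence is highest.
```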

Phonetic Transcription based Speech Recognition using Stochastic Matching Method (확률적 매칭 방법을 사용한 음소열 기반 음성 인식)

  • Kim, Weon-Goo
    • Journal of the Korean Institute of Intelligent Systems / v.17 no.5 / pp.696-700 / 2007
  • A new method is presented that improves the performance of phonetic-transcription-based speech recognition with a speaker-independent (SI) phonetic recognizer. Since an SI phoneme-HMM-based recognition system stores only the phoneme transcription of each input sentence, the storage space can be reduced greatly. However, its performance is worse than that of a speaker-dependent system because of the phoneme recognition errors introduced by the SI models. A new training method is therefore presented that iteratively estimates the phonetic transcription and the transformation vectors, using speaker adaptation techniques to reduce the mismatch between the training utterances and the set of SI models; the transformation vectors are estimated with stochastic matching methods. Experiments over actual telephone lines show that the error rate can be reduced by about 45% compared with the conventional method.
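
As a rough illustration of the iterative estimation described above, the loop below alternates between decoding a phonetic transcription and re-estimating a single cepstral bias vector toward the SI models. `decode_transcription` and `aligned_state_means` are hypothetical stand-ins for the recognizer and the Viterbi alignment; the paper's actual transformation model may differ.

```python
import numpy as np

def stochastic_matching(features, decode_transcription, aligned_state_means, n_iter=3):
    """Jointly refine the phonetic transcription and a global cepstral bias (simplified sketch)."""
    bias = np.zeros(features.shape[1])
    transcription = None
    for _ in range(n_iter):
        compensated = features - bias                              # apply current transformation
        transcription = decode_transcription(compensated)          # step 1: re-decode the phoneme string
        means = aligned_state_means(compensated, transcription)    # (T, D) aligned SI model means
        bias = np.mean(features - means, axis=0)                   # step 2: ML re-estimate of the bias
    return transcription, bias
```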

A Study on Speech Recognition based on Phoneme for Korean Subway Station Names (한국의 지하철역명을 위한 음소 기반의 음성인식에 관한 연구)

  • Kim, Beom-Seung; Kim, Soon-Hyob
    • Journal of the Korean Society for Railway / v.14 no.3 / pp.228-233 / 2011
  • This paper presents a method for implementing phoneme-based speech recognition of Korean subway station names that takes their phonological characteristics into account. To select the optimum phone-like unit (PLU) set, pronunciation dictionaries covering four cases of PLU sets and phonological variations were constructed and their recognition rates were evaluated. The best recognition rate (97.74%) was obtained with a triphone model that treats initial and final consonants as separate recognition units and models the phonological variations.

Stochastic Pronunciation Lexicon Modeling for Large Vocabulary Continuous Speech Recognition (확률 발음사전을 이용한 대어휘 연속음성인식)

  • Yun, Seong-Jin; Choi, Hwan-Jin; Oh, Yung-Hwan
    • The Journal of the Acoustical Society of Korea / v.16 no.2 / pp.49-57 / 1997
  • In this paper, we propose a stochastic pronunciation lexicon model for a large vocabulary continuous speech recognition system. The stochastic lexicon can be regarded as an HMM: a stochastic finite-state automaton consisting of a Markov chain of subword states, in which each subword state of the baseform carries a probability distribution over subword units. In this method, an acoustic representation of a word can be derived automatically from sample sentence utterances and subword unit models, and the stochastic lexicon is further optimized jointly with the subword models and the recognizer. Experimental results on 3,000-word continuous speech recognition show that the proposed method reduces the word error rate by 23.6% and the sentence error rate by 10% compared with methods based on standard phonetic representations of words. (A toy sketch of such a lexicon entry follows below.)

  • PDF
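
A toy sketch of what a stochastic lexicon entry might look like is given below: each baseform position holds a probability distribution over subword units, and a recognized subword string is scored against the chain. The entry, the unit labels, and the length-matching restriction are invented for illustration only.

```python
import math

# Hypothetical stochastic lexicon: one baseform state per position,
# each state holding a probability distribution over subword units.
stochastic_lexicon = {
    "example_word": [
        {"y": 0.8, "i": 0.2},
        {"v": 0.7, "eo": 0.3},
        {"ng": 0.9, "n": 0.1},
    ],
}

def lexicon_score(word, subword_units):
    """Log-probability of a recognized subword sequence under the word's state chain."""
    states = stochastic_lexicon[word]
    if len(subword_units) != len(states):
        return float("-inf")   # the real automaton would also allow insertions and deletions
    return sum(math.log(state.get(u, 1e-6)) for state, u in zip(states, subword_units))
```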

Phoneme Recognition Using Frequency State Neural Network (주파수 상태 신경 회로망을 이용한 음소 인식)

  • Lee, Jun-Mo; Hwang, Yeong-Soo; Kim, Seong-Jong; Shin, In-Chul
    • The Journal of the Acoustical Society of Korea / v.13 no.4 / pp.12-19 / 1994
  • This paper reports a new structure for a phoneme recognition neural network. The proposed network can model the structure of the frequency bands as well as the temporal structure of phonemic features used in the conventional TSNN. We trained the network on the phonemes (아, 이, 오, ㅅ, ㅊ, ㅍ, ㄱ, ㅇ, ㄹ, ㅁ), and its phoneme recognition accuracy was slightly better than that of the conventional TDNN and TSNN, which use only the temporal structure of phonemic features.

  • PDF

A Study on Error Correction Using Phoneme Similarity in Post-Processing of Speech Recognition (음성인식 후처리에서 음소 유사율을 이용한 오류보정에 관한 연구)

  • Han, Dong-Jo; Choi, Ki-Ho
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.6 no.3 / pp.77-86 / 2007
  • Recently, systems with speech recognition interfaces, such as telematics terminals, are being developed. However, speech recognition still produces many errors, so error correction is being studied actively. This paper proposes an error correction method for the post-processing stage of speech recognition based on the features of Korean phonemes. The method relies on a phoneme similarity measure that reflects those features: data are trained per mono-phone, MFCC and LPC features are extracted for each Korean phoneme, and a Bhattacharyya distance measure is used to obtain the similarity between one phoneme and another. Using this phoneme similarity, errors in eojeols (word units) that cannot be morphologically analyzed are corrected, after which syllable recovery and morphological analysis are performed again. The experimental results show improvements of 7.5% and 5.3% for MFCC and LPC features, respectively. (A short sketch of the Bhattacharyya distance computation follows below.)

  • PDF
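
The phoneme-to-phoneme similarity described above rests on the Bhattacharyya distance between two phoneme feature models. A short sketch, assuming each phoneme is summarized as a multivariate Gaussian over its MFCC or LPC features, is given below; the exponential mapping to a similarity score is an assumption, not the paper's exact definition.

```python
import numpy as np

def bhattacharyya_distance(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two multivariate Gaussians N(mu1, cov1) and N(mu2, cov2)."""
    cov = (cov1 + cov2) / 2.0
    diff = mu1 - mu2
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

def phoneme_similarity(mu1, cov1, mu2, cov2):
    """Map the distance to a similarity in (0, 1]; a smaller distance means a more confusable pair."""
    return float(np.exp(-bhattacharyya_distance(mu1, cov1, mu2, cov2)))
```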

Effects of Orthographic Knowledge and Phonological Awareness on Visual Word Decoding and Encoding in Children Aged 5-8 Years (5~8세 아동의 철자지식과 음운인식이 시각적 단어 해독과 부호화에 미치는 영향)

  • Na, Ye-Ju; Ha, Ji-Wan
    • Journal of Digital Convergence / v.14 no.6 / pp.535-546 / 2016
  • This study examined the relations among orthographic knowledge, phonological awareness, and visual word decoding and encoding abilities. Children aged 5 to 8 years took a letter knowledge test, a phoneme-grapheme correspondence test, an orthographic representation test (regular and irregular word representation), a phonological awareness test (word, syllable, and phoneme awareness), a word decoding test (regular and irregular word reading), and a word encoding test (regular and irregular word dictation). Performance on all tasks differed significantly among groups, and the tasks were positively correlated. In the word decoding and encoding tests, the variables with the most predictive power were letter knowledge and orthographic representation ability. At these ages, orthographic knowledge influenced visual word decoding and encoding skills more than phonological awareness did.

CRNN-Based Korean Phoneme Recognition Model with CTC Algorithm (CTC를 적용한 CRNN 기반 한국어 음소인식 모델 연구)

  • Hong, Yoonseok; Ki, Kyungseo; Gweon, Gahgene
    • KIPS Transactions on Software and Data Engineering / v.8 no.3 / pp.115-122 / 2019
  • For Korean phoneme recognition, Hidden Markov Model-Gaussian Mixture Model (HMM-GMM) systems or hybrid models that combine artificial neural networks with HMMs have mainly been used. However, such models require force-aligned training corpora that are manually annotated by experts. Recently, researchers have combined recurrent neural network (RNN) structures with the connectionist temporal classification (CTC) algorithm to avoid the need for manually annotated training data. In terms of implementation, however, RNN-based models need ever larger amounts of data as their structures become more sophisticated, which is particularly problematic for Korean, which lacks refined corpora. In this study, we apply the CTC algorithm, which does not require force alignment, to build a Korean phoneme recognition model. Specifically, the model is based on a convolutional neural network (CNN), which needs relatively little data and can be trained faster than RNN-based models. We present results from two experiments and the best-performing phoneme recognition model, which distinguishes 49 Korean phonemes. The best model combines a CNN with a 3-hop bidirectional LSTM and achieves a final Phoneme Error Rate (PER) of 3.26, a considerable improvement over existing Korean phoneme recognition models, which report PERs ranging from 10 to 12.
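
To make the CNN-plus-bidirectional-LSTM-plus-CTC setup concrete, here is a minimal PyTorch sketch of a model of that general shape. The layer sizes, the feature dimensions, the interpretation of "3-hop" as three stacked LSTM layers, and the 49-phoneme output (plus a CTC blank) are assumptions for illustration, not the paper's configuration.

```python
import torch
import torch.nn as nn

class CRNNPhonemeRecognizer(nn.Module):
    def __init__(self, n_mels=40, n_phonemes=49, hidden=128):
        super().__init__()
        self.conv = nn.Sequential(                          # convolutional front end over time x frequency
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.rnn = nn.LSTM(32 * n_mels, hidden, num_layers=3,
                           bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_phonemes + 1)    # +1 output for the CTC blank label

    def forward(self, x):                                   # x: (batch, time, n_mels) log-mel features
        x = self.conv(x.unsqueeze(1))                       # (batch, 32, time, n_mels)
        b, c, t, f = x.shape
        x = x.permute(0, 2, 1, 3).reshape(b, t, c * f)      # flatten channels x frequency per frame
        x, _ = self.rnn(x)
        return self.out(x).log_softmax(dim=-1)              # (batch, time, n_phonemes + 1)

# CTC training step: no frame-level (force-aligned) phoneme labels are required.
model = CRNNPhonemeRecognizer()
ctc = nn.CTCLoss(blank=49)                                  # blank is the extra 50th class
feats = torch.randn(2, 100, 40)                             # two dummy utterances of 100 frames
log_probs = model(feats).permute(1, 0, 2)                   # CTCLoss expects (time, batch, classes)
targets = torch.randint(0, 49, (2, 20))                     # dummy phoneme label sequences (indices 0..48)
loss = ctc(log_probs, targets,
           torch.full((2,), 100, dtype=torch.long),         # input lengths
           torch.full((2,), 20, dtype=torch.long))          # target lengths
loss.backward()
```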