• Title/Summary/Keyword: phoneme HMM

Search Results: 62

A Study on the Korean Syllable As Recognition Unit (인식 단위로서의 한국어 음절에 대한 연구)

  • Kim, Yu-Jin; Kim, Hoi-Rin; Chung, Jae-Ho
    • The Journal of the Acoustical Society of Korea, v.16 no.3, pp.64-72, 1997
  • In this paper, a study and experiments are performed to find a recognition unit suitable for a large-vocabulary recognition system. Specifically, the phoneme, which is currently the common recognition unit, and the syllable, which characterizes Korean well, are selected. Through comparative recognition experiments, we examine whether the syllable can serve as the recognition unit of a Korean recognition system. To report an objective comparison, we collected speech data from one male speaker and built a speech database with hand-segmented phoneme boundaries and labels. For HMM training and recognition under identical conditions, we used HTK (HMM Tool Kit) 2.0, a commercial tool from Entropic Co. Two continuous-HMM topologies were applied in training each recognition unit: 3 emitting states out of 5 states, and 6 emitting states out of 8 states. Three sets of PBW (Phonetically Balanced Words) and one set of POW (Phonetically Optimized Words) were used for training, and another PBW set for recognition, i.e., speaker-dependent medium-vocabulary recognition. The experiments report a recognition rate of 95.65% with the phoneme unit and 94.41% with the syllable unit, while decoding with the syllable unit is 25% faster than with the phoneme unit. A minimal sketch of the left-to-right topology is given below.
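
The two topologies follow HTK's convention of non-emitting entry and exit states (3 emitting states of a 5-state model, 6 of an 8-state model). As a minimal sketch under that assumption, with illustrative rather than trained probabilities, such a left-to-right transition matrix can be written as:

```python
import numpy as np

def left_to_right_transitions(n_states=5):
    """Transition matrix for an HTK-style left-to-right HMM.

    States 0 and n_states-1 are non-emitting entry/exit states, so a
    5-state model has 3 emitting states (the paper's first topology).
    The probabilities are illustrative initial values, not trained ones.
    """
    A = np.zeros((n_states, n_states))
    A[0, 1] = 1.0                      # entry state jumps to the first emitting state
    for i in range(1, n_states - 1):
        A[i, i] = 0.6                  # self-loop
        A[i, i + 1] = 0.4              # advance to the next state
    return A

print(left_to_right_transitions(5))    # 3 emitting states of 5
print(left_to_right_transitions(8))    # 6 emitting states of 8
```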

Phoneme Similarity Error Correction System using Bhattacharyya Distance Measurement Method (바타챠랴 거리 측정법을 이용한 음소 유사율 오류 보정 개선 시스템)

  • Ahn, Chan-Shik; Oh, Sang-Yeob
    • Journal of the Korea Society of Computer and Information, v.15 no.6, pp.73-80, 2010
  • Vocabulary recognition systems suffer reduced recognition rates when they return inaccurate vocabulary or confuse similar phonemes, so a method for handling unrecognized similar phonemes and an efficient feature extraction process are required. This paper therefore proposes an improved phoneme-similarity error correction system based on Bhattacharyya distance measurement of phoneme features. Phoneme likelihoods are obtained from monophone HMMs trained on phoneme data, and similar phonemes are guided toward the correct phoneme using the Bhattacharyya distance measure, which effectively improves the recognition rate. In a performance comparison, recognition improved by 1.2% to 97.91% relative to systems using Euclidean distance measurement and dynamic time warping (DTW).
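
The abstract does not give the distance formula itself; assuming the standard Bhattacharyya distance between two Gaussian phoneme models (means and covariances), a minimal sketch is:

```python
import numpy as np

def bhattacharyya_gaussian(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two multivariate Gaussians.

    D_B = 1/8 (mu1-mu2)^T S^-1 (mu1-mu2)
          + 1/2 ln( det S / sqrt(det cov1 * det cov2) ),  S = (cov1 + cov2) / 2
    """
    mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
    cov1, cov2 = np.atleast_2d(cov1), np.atleast_2d(cov2)
    S = (cov1 + cov2) / 2.0
    diff = mu1 - mu2
    term1 = diff @ np.linalg.solve(S, diff) / 8.0
    _, logdet_S = np.linalg.slogdet(S)
    _, logdet_1 = np.linalg.slogdet(cov1)
    _, logdet_2 = np.linalg.slogdet(cov2)
    term2 = 0.5 * (logdet_S - 0.5 * (logdet_1 + logdet_2))
    return term1 + term2

# Toy example: two 2-D phoneme feature models
print(bhattacharyya_gaussian([0.0, 0.0], np.eye(2), [1.0, 0.5], np.eye(2) * 1.5))
```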

CRNN-Based Korean Phoneme Recognition Model with CTC Algorithm (CTC를 적용한 CRNN 기반 한국어 음소인식 모델 연구)

  • Hong, Yoonseok; Ki, Kyungseo; Gweon, Gahgene
    • KIPS Transactions on Software and Data Engineering, v.8 no.3, pp.115-122, 2019
  • For Korean phoneme recognition, Hidden Markov Model-Gaussian Mixture Model (HMM-GMM) systems or hybrid models that combine artificial neural networks with HMMs have mainly been used. However, these approaches have the limitation that they require force-aligned training corpora manually annotated by experts. Recently, researchers have used neural phoneme recognition models that combine recurrent neural network (RNN) structures with the connectionist temporal classification (CTC) algorithm to avoid the need for manually annotated training data. Yet, in terms of implementation, such RNN-based models have another difficulty: the amount of data required grows as the structure becomes more sophisticated. This is particularly problematic for Korean, which lacks refined corpora. In this study, we use the CTC algorithm, which does not require forced alignment, to create a Korean phoneme recognition model. Specifically, the model is based on a convolutional neural network (CNN), which requires relatively little data and can be trained faster than RNN-based models. We present results from two experiments and a best-performing phoneme recognition model that distinguishes 49 Korean phonemes. The best-performing model combines a CNN with a 3-hop bidirectional LSTM, reaching a final Phoneme Error Rate (PER) of 3.26, a considerable improvement over existing Korean phoneme recognition models, which report PERs ranging from 10 to 12.
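
As a rough, hypothetical sketch of the kind of model the abstract describes (a CNN front-end, a 3-layer bidirectional LSTM standing in for the "3-hop" structure, and CTC training over 49 phonemes plus a blank; the layer sizes are assumptions, not the paper's):

```python
import torch
import torch.nn as nn

NUM_PHONEMES = 49                    # from the abstract
NUM_CLASSES = NUM_PHONEMES + 1       # +1 for the CTC blank symbol

class CnnBiLstmCtc(nn.Module):
    def __init__(self, n_mels=40, hidden=128):
        super().__init__()
        self.conv = nn.Conv1d(n_mels, hidden, kernel_size=3, padding=1)
        self.rnn = nn.LSTM(hidden, hidden, num_layers=3,
                           bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, NUM_CLASSES)

    def forward(self, x):            # x: (batch, time, n_mels)
        h = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        h, _ = self.rnn(h)
        return self.out(h).log_softmax(-1)   # (batch, time, classes)

model = CnnBiLstmCtc()
ctc = nn.CTCLoss(blank=0)                    # no forced alignment needed
feats = torch.randn(2, 100, 40)              # dummy batch: 2 utterances, 100 frames
targets = torch.randint(1, NUM_CLASSES, (2, 20))
log_probs = model(feats).transpose(0, 1)     # CTCLoss expects (time, batch, classes)
loss = ctc(log_probs, targets,
           input_lengths=torch.full((2,), 100),
           target_lengths=torch.full((2,), 20))
loss.backward()
```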

Stereo Vision Neural Networks with Competition and Cooperation for Phoneme Recognition

  • Kim, Sung-Ill; Chung, Hyun-Yeol
    • The Journal of the Acoustical Society of Korea, v.22 no.1E, pp.3-10, 2003
  • This paper describes two kinds of neural networks for stereoscopic vision, which have been applied to the identification of human speech. In speech recognition based on the stereoscopic vision neural networks (SVNN), similarities are first obtained by comparing input vocal signals with standard models. They are then passed to a dynamic process in which competitive and cooperative processes are carried out among neighboring similarities, until only one winner neuron remains. In a comparative study, the average phoneme recognition accuracy of the two-layered SVNN was 7.7% higher than that of a Hidden Markov Model (HMM) recognizer with a single mixture and three states, and that of the three-layered SVNN was 6.6% higher. SVNN therefore outperformed the existing HMM recognizer in phoneme recognition.
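
The paper's exact update rule is not given in the abstract; a toy winner-take-all sketch with assumed cooperation (neighbour smoothing) and competition (global inhibition) terms might look like:

```python
import numpy as np

def winner_take_all(similarities, excite=0.08, inhibit=0.05, steps=50):
    """Toy competition/cooperation dynamics over phoneme similarities.

    Each unit is excited by its own neighbour-smoothed activity and
    inhibited by the total activity of the others until one winner remains.
    The constants are illustrative; the paper's actual dynamics may differ.
    """
    a = np.asarray(similarities, float).copy()
    for _ in range(steps):
        neigh = np.convolve(a, [0.25, 0.5, 0.25], mode="same")   # cooperation
        a += excite * neigh - inhibit * (a.sum() - a)            # competition
        a = np.clip(a, 0.0, 1.0)
    return int(np.argmax(a)), a

# Similarities of an input frame to, say, five phoneme templates
winner, acts = winner_take_all([0.62, 0.58, 0.70, 0.40, 0.55])
print(winner, acts.round(2))
```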

Large Scale Voice Dialling using Speaker Adaptation (화자 적응을 이용한 대용량 음성 다이얼링)

  • Kim, Weon-Goo
    • Journal of Institute of Control, Robotics and Systems, v.16 no.4, pp.335-338, 2010
  • A new method that improves the performance of a large-scale voice dialling system using speaker adaptation is presented. Since a speaker-independent (SI) speech recognition system with phoneme HMMs stores only the phoneme string of each input sentence, storage space can be greatly reduced. However, its performance is worse than that of a speaker-dependent system because of the mismatch between the input utterance and the SI models. To reduce the mismatch between the training utterances and the set of SI models, the proposed method iteratively estimates the phonetic string and the adaptation vectors using speaker adaptation techniques; the adaptation vectors are estimated with stochastic matching methods. Experiments over actual telephone lines show that the proposed method performs better than the conventional method based on the SI phonetic recognizer.
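
As a simplified illustration of stochastic matching, not the paper's exact algorithm, one can alternate between aligning the utterance to the SI models and re-estimating a single bias (adaptation) vector:

```python
import numpy as np

def estimate_bias(frames, model_means, n_iter=5):
    """Toy stochastic-matching sketch: estimate one cepstral bias vector.

    `frames` are the utterance feature vectors, `model_means` the SI model
    means (one per phoneme state). Alternately (1) align each shifted frame
    to its nearest model mean and (2) re-estimate the bias as the mean
    residual. A simplified illustration only.
    """
    bias = np.zeros(frames.shape[1])
    for _ in range(n_iter):
        shifted = frames - bias
        # hard "alignment": nearest SI mean for every frame
        d = ((shifted[:, None, :] - model_means[None, :, :]) ** 2).sum(-1)
        nearest = model_means[d.argmin(axis=1)]
        bias = (frames - nearest).mean(axis=0)     # re-estimate the mismatch
    return bias

rng = np.random.default_rng(0)
means = rng.normal(size=(10, 13))                  # 10 SI phoneme means, 13-dim
utt = means[rng.integers(0, 10, 200)] + 0.7 + 0.1 * rng.normal(size=(200, 13))
print(estimate_bias(utt, means).round(2))          # should hover around 0.7
```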

A Study of Phoneme Modeling for Improvement of Automatic Segmentation Performance (자동 음소 분할 성능 개선을 위한 음소 모델링에 관한 연구)

  • Park, Hae Young; Kim, Hyung Soon
    • Proceedings of the Acoustical Society of Korea Conference, spring, pp.175-178, 2002
  • In this paper, we implement a system that uses Hidden Markov Models (HMM) to automatically segment, at the phoneme level, the database used for corpus-based TTS. When phoneme segmentation is performed with HMMs, performance differs greatly depending on how the HMMs are modeled. We therefore carried out several experiments and performance evaluations with different HMM modeling methods. The results show that, unlike in speech recognition, monophone models outperform triphone models for this task, and that a further improvement can be obtained through energy-based post-processing.
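
As a hypothetical example of the energy-based post-processing mentioned above (the paper's actual criterion may differ), an HMM-estimated boundary can be snapped to the nearest sharp change in short-time energy:

```python
import numpy as np

def refine_boundary(energy, hmm_boundary, search=5):
    """Toy energy-based post-processing for an HMM phoneme boundary.

    Within +/- `search` frames of the HMM-estimated boundary, move the
    boundary to the frame where the short-time energy changes most sharply.
    """
    lo = max(1, hmm_boundary - search)
    hi = min(len(energy) - 1, hmm_boundary + search)
    deltas = np.abs(np.diff(energy[lo - 1:hi + 1]))   # |E[t] - E[t-1]| in the window
    return lo + int(deltas.argmax())

energy = np.concatenate([np.full(50, 0.8), np.full(50, 0.1)])  # loud vowel -> quiet stop
print(refine_boundary(energy, hmm_boundary=47))                # -> 50, the true change point
```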

Support Vector Machine Based Phoneme Segmentation for Lip Synch Application

  • Lee, Kun-Young; Ko, Han-Seok
    • Speech Sciences, v.11 no.2, pp.193-210, 2004
  • In this paper, we develop a real-time lip-synch system that animates a 2-D avatar's lip motion in synch with an incoming speech utterance. To achieve real-time operation, we bound the processing time with merge and split procedures that perform coarse-to-fine phoneme classification. At each stage of the classification, we apply a support vector machine (SVM) to reduce the computational load while retaining the desired accuracy. The coarse-to-fine classification is accomplished via two stages of feature extraction: first, each speech frame is acoustically analyzed into 3 classes of lip opening using Mel Frequency Cepstral Coefficients (MFCC) as features; secondly, each frame is further refined into a detailed lip shape using formant information. We implemented the system with 2-D lip animation, which shows the effectiveness of the proposed two-stage procedure for real-time lip-synch. The method of phoneme merging with SVMs recognized about twice as fast as a method employing a Hidden Markov Model (HMM): the typical per-frame latency of our method was on the order of 18.22 milliseconds, while the HMM method applied under identical conditions required about 30.67 milliseconds.
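
A hypothetical sketch of the two-stage, coarse-to-fine SVM classification (scikit-learn is used for illustration; the feature dimensions, class counts, and dummy data are assumptions):

```python
import numpy as np
from sklearn.svm import SVC

# Stage 1 groups frames into 3 lip-opening classes from MFCCs;
# stage 2 refines each coarse class into finer lip shapes from formants.
rng = np.random.default_rng(0)
mfcc = rng.normal(size=(300, 13))                 # dummy MFCC frames
formants = rng.normal(size=(300, 2))              # dummy F1/F2 values
coarse_labels = rng.integers(0, 3, 300)           # 3 lip-opening classes
fine_labels = rng.integers(0, 4, 300)             # finer lip shapes

coarse_svm = SVC(kernel="rbf").fit(mfcc, coarse_labels)
fine_svms = {c: SVC(kernel="rbf").fit(formants[coarse_labels == c],
                                      fine_labels[coarse_labels == c])
             for c in range(3)}

def classify_frame(mfcc_frame, formant_frame):
    c = int(coarse_svm.predict(mfcc_frame.reshape(1, -1))[0])
    return c, int(fine_svms[c].predict(formant_frame.reshape(1, -1))[0])

print(classify_frame(mfcc[0], formants[0]))
```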

HMM-based Music Identification System for Copyright Protection (저작권 보호를 위한 HMM기반의 음악 식별 시스템)

  • Kim, Hee-Dong; Kim, Do-Hyun; Kim, Ji-Hwan
    • Phonetics and Speech Sciences, v.1 no.1, pp.63-67, 2009
  • In this paper, in order to protect music copyrights, we propose a music identification system that is scalable in the number of registered music files and robust to signal-level variations of the registered music. For its implementation, we define the new concepts of 'music word' and 'music phoneme' as recognition units from which 'music acoustic models' are constructed. With these concepts, we apply the HMM-based framework used in continuous speech recognition to identify music. Each music file is transformed into a sequence of 39-dimensional vectors, which is represented by ordered states with Gaussian mixtures; these states are trained using the Baum-Welch re-estimation method. Music files with a suspicious copyright are likewise transformed into vector sequences, and the most probable registered music file is identified with the Viterbi algorithm over the music identification network. We implemented the system for 1,000 MP3 music files and tested it under variations of MP3 bit rate and music speed. The proposed system is robust to these signal variations, and since it is based on HMMs, its scalability does not depend on the number of registered music files.
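
As a minimal sketch of the identification scheme, with hmmlearn standing in for the paper's own Baum-Welch training and Viterbi scoring, and toy features in place of the real 39-dimensional vectors:

```python
import numpy as np
from hmmlearn import hmm          # stand-in for the paper's own HMM implementation

def make_features(n_frames, seed):
    """Placeholder for the 39-dimensional feature sequence of one music file."""
    return np.random.default_rng(seed).normal(size=(n_frames, 39))

# Train one mixture-of-Gaussians HMM per registered music file (Baum-Welch).
registered = {}
for music_id in range(3):                       # toy registry of 3 files
    feats = make_features(500, seed=music_id)
    model = hmm.GMMHMM(n_components=8, n_mix=2, covariance_type="diag",
                       n_iter=10, random_state=0)
    registered[music_id] = model.fit(feats)

# Identify a suspicious file as the model with the best Viterbi log-probability.
query = make_features(500, seed=1)              # should match music_id 1
scores = {mid: m.decode(query)[0] for mid, m in registered.items()}
print(max(scores, key=scores.get))
```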

Phoneme-based Recognition of Korean Speech Using HMM(Hidden Markov Model) and Genetic Algorithm (HMM과 GA를 이용한 한국어 음성의 음소단위 인식)

  • 박준하; 조성원
    • Proceedings of the Korean Institute of Intelligent Systems Conference, 1997.10a, pp.291-295, 1997
  • Most speech recognition systems currently being developed and commercialized are word-based; their coverage can be extended by adding more words, but as the number of words to search grows, the overall speed and performance of the system tend to degrade. To overcome this drawback, this paper implements a phoneme-level recognition system for Korean speech using an HMM (Hidden Markov Model) and a GA (Genetic Algorithm). LPC cepstrum coefficients are used as speech features. At recognition time, the phonemes of each target word are separated by the GA, and recognition is performed by applying HMM parameters trained at the phoneme level, so that recognition is possible for each individual phoneme.
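
A toy sketch of the GA-based boundary search described above, assuming a scoring function that returns the per-phoneme HMM log-likelihood of a candidate segmentation (the crossover and mutation settings are illustrative, not the paper's):

```python
import numpy as np

def ga_segment(score_fn, n_frames, n_boundaries, pop=30, gens=40, seed=0):
    """Toy genetic-algorithm search for phoneme boundaries inside a word.

    Each chromosome is a sorted list of boundary frames; `score_fn` should
    return the total HMM log-likelihood of the resulting segmentation.
    """
    rng = np.random.default_rng(seed)
    population = [np.sort(rng.integers(1, n_frames, n_boundaries)) for _ in range(pop)]
    for _ in range(gens):
        fitness = np.array([score_fn(c) for c in population])
        parents = [population[i] for i in np.argsort(fitness)[-pop // 2:]]   # selection
        children = []
        for _ in range(pop - len(parents)):
            a, b = rng.choice(len(parents), 2, replace=False)
            cut = rng.integers(1, n_boundaries) if n_boundaries > 1 else 0
            child = np.sort(np.concatenate([parents[a][:cut], parents[b][cut:]]))  # crossover
            if rng.random() < 0.3:                                                 # mutation
                child[rng.integers(n_boundaries)] += rng.integers(-3, 4)
            children.append(np.clip(np.sort(child), 1, n_frames - 1))
        population = parents + children
    return max(population, key=score_fn)

# Dummy scorer that prefers boundaries near frames 30 and 70
score = lambda c: -float(np.abs(np.asarray(c) - np.array([30, 70])).sum())
print(ga_segment(score, n_frames=100, n_boundaries=2))
```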

A Comparative Study on the phoneme recognition rate with regard to HMM training algorithms (HMM 훈련 알고리즘에 따른 음소인식률 비교 연구)

  • 구명완
    • Proceedings of the Acoustical Society of Korea Conference, 1998.08a, pp.298-301, 1998
  • This paper describes how the phoneme recognition rate changes with the HMM training method. The acoustic models are HMMs with either discrete or continuous probability densities, and the training algorithms are the forward-backward and segmental K-means algorithms. The continuous densities consist of N mixtures; to grow the number of mixtures from a single component, both a binary-tree scheme and a one-by-one scheme were used. Phoneme recognition experiments with the various combinations show that the forward-backward algorithm with continuous densities and one-by-one mixture expansion gives the best results.
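
As a toy illustration of one-by-one mixture expansion (as opposed to binary-tree splitting of every component at once); the perturbation size and the rule for choosing which component to split are assumptions, not taken from the paper:

```python
import numpy as np

def split_one_by_one(means, weights, counts, eps=0.2):
    """Toy one-by-one mixture expansion: split the heaviest component.

    The component with the largest occupation count is duplicated and its
    two copies are perturbed by +/- eps; binary-tree expansion would instead
    split every component simultaneously.
    """
    i = int(np.argmax(counts))
    m1, m2 = means[i] - eps, means[i] + eps
    new_means = np.vstack([means, m2])
    new_means[i] = m1
    new_weights = np.append(weights, weights[i] / 2)
    new_weights[i] /= 2
    new_counts = np.append(counts, counts[i] / 2)
    new_counts[i] /= 2
    return new_means, new_weights, new_counts

means = np.array([[0.0, 0.0]])           # start from a single Gaussian
weights = np.array([1.0])
counts = np.array([100.0])
for _ in range(3):                        # grow 1 -> 4 mixtures, one at a time
    means, weights, counts = split_one_by_one(means, weights, counts)
print(means, weights)
```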
