• Title/Summary/Keyword: speech recognition rate improvement

Search results: 94 (processing time: 0.023 seconds)

Study on the Improvement of Speech Recognizer by Using Time Scale Modification (시간축 변환을 이용한 음성 인식기의 성능 향상에 관한 연구)

  • 이기승
    • The Journal of the Acoustical Society of Korea
    • /
    • v.23 no.6
    • /
    • pp.462-472
    • /
    • 2004
  • In this paper, a method is proposed for compensating for the performance degradation of automatic speech recognition (ASR) caused mainly by speaking-rate variation. Before the new method is introduced, a quantitative analysis of the performance of an HMM-based ASR system with respect to speaking rate is first performed. This analysis shows significant performance degradation for rapidly spoken speech signals. A quantitative measure of speaking rate is then introduced. Time-scale modification (TSM) is employed to compensate for the speaking-rate difference between input speech signals and training speech signals. Finally, a method is proposed for compensating for the degradation caused by speaking-rate variation, in which TSM is selectively applied according to the measured speaking rate. Results of ASR experiments on 10-digit mobile phone numbers confirm that the error rate was reduced by 15.5% when the proposed method was applied to fast speech signals.
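The selective-TSM idea above can be sketched with a plain overlap-add time stretch: read the input at the estimated speaking rate while writing at unit rate, and apply it only when the rate estimate exceeds a threshold. This is a minimal sketch under assumed parameter names, not the paper's exact TSM algorithm (which would typically use SOLA/WSOLA for phase alignment).

```python
import numpy as np

def ola_time_scale(signal, rate, frame=256, hop=128):
    """Stretch/compress `signal` by `rate` using simple overlap-add (OLA).

    rate > 1 shortens the output (faster speech), rate < 1 lengthens it.
    Real systems would use SOLA/WSOLA to keep segments phase-aligned;
    this is a minimal illustration.
    """
    window = np.hanning(frame)
    out_len = int(len(signal) / rate) + frame
    out = np.zeros(out_len)
    norm = np.zeros(out_len)
    t = 0.0        # read position (advances at `rate` hops)
    out_pos = 0    # write position (advances at unit hops)
    while int(t) + frame <= len(signal):
        seg = signal[int(t):int(t) + frame] * window
        out[out_pos:out_pos + frame] += seg
        norm[out_pos:out_pos + frame] += window
        t += hop * rate
        out_pos += hop
    norm[norm == 0] = 1.0                      # avoid division by zero
    return out[:out_pos] / norm[:out_pos]

def normalize_speaking_rate(signal, est_rate, threshold=1.2):
    """Apply TSM only when the estimated speaking rate is high,
    mirroring the paper's selective-compensation idea.  The threshold
    value is an illustrative assumption."""
    if est_rate > threshold:
        return ola_time_scale(signal, 1.0 / est_rate)  # slow fast speech down
    return signal
```

Applying the stretch only above a rate threshold avoids degrading speech that already matches the training data's tempo.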

Recognition Performance Improvement of Unsupervised Limabeam Algorithm using Post Filtering Technique

  • Nguyen, Dinh Cuong;Choi, Suk-Nam;Chung, Hyun-Yeol
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.8 no.4
    • /
    • pp.185-194
    • /
    • 2013
  • In distant-talking environments, speech recognition performance degrades significantly due to noise and reverberation. Work by Michael L. Seltzer showed that in microphone-array speech recognition, the word error rate can be significantly reduced by adapting the beamformer weights to generate a sequence of features that maximizes the likelihood of the correct hypothesis. In this approach, called the Likelihood Maximizing Beamforming (Limabeam) algorithm, one implementation is Unsupervised Limabeam (USL), which can improve recognition performance in any environment. Our investigation of USL showed that because the optimization depends strongly on the transcription output of the first recognition step, the output can become unstable, which may lower performance. To obtain a more accurate transcription from the first step, post-filtering techniques can be employed. In this work, as a post-filtering technique for the first recognition step of USL, we propose adding a Wiener filter combined with a feature-weighted Mahalanobis distance to improve recognition performance. We also suggest an alternative way to implement the Limabeam algorithm for a Hidden Markov Network (HM-Net) speech recognizer for efficient implementation. Speech recognition experiments performed in a real distant-talking environment confirm the efficacy of the Limabeam algorithm in the HM-Net speech recognition system and the improved performance of the proposed method.
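The Wiener post-filter component can be sketched as a per-bin spectral gain G = SNR/(1 + SNR) applied to the noisy STFT. This is a generic Wiener-filter sketch with assumed parameter names, not the paper's exact front end (which additionally uses the feature-weighted Mahalanobis distance).

```python
import numpy as np

def wiener_gain(noisy_power, noise_power, floor=0.1):
    """Per-bin Wiener gain G = SNR / (1 + SNR).

    `noise_power` would be estimated from non-speech frames; `floor`
    limits musical noise.  Both parameter names are illustrative.
    """
    snr = np.maximum(noisy_power / np.maximum(noise_power, 1e-12) - 1.0, 0.0)
    gain = snr / (1.0 + snr)
    return np.maximum(gain, floor)

def denoise_frame(noisy_spectrum, noise_power):
    """Apply the gain to one complex STFT frame."""
    g = wiener_gain(np.abs(noisy_spectrum) ** 2, noise_power)
    return g * noisy_spectrum
```

High-SNR bins pass nearly unchanged (gain near 1) while noise-dominated bins are attenuated to the floor, which is what stabilizes the first-pass transcription.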

User Adaptive Post-Processing in Speech Recognition for Mobile Devices (모바일 기기를 위한 음성인식의 사용자 적응형 후처리)

  • Kim, Young-Jin;Kim, Eun-Ju;Kim, Myung-Won
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.13 no.5
    • /
    • pp.338-342
    • /
    • 2007
  • In this paper we propose a user-adaptive post-processing method to improve the accuracy of speaker-dependent, isolated-word speech recognition, particularly for mobile devices. Our method treats the recognition result of the basic recognizer as a high-level speech feature and processes it further to produce the correct result. It learns the correlation between the output of the basic recognizer and the correct final result, and uses this correlation to correct erroneous output. A multi-layer perceptron model is built for each frequently misrecognized word. In experiments, we achieved a significant improvement of 41% in recognition accuracy (a 41% error correction rate).
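The learn-to-correct idea can be sketched as a tiny MLP that maps the base recognizer's (one-hot encoded) output word to the word the user most likely intended, trained on the user's correction history. All names, sizes, and the single-global-model simplification are assumptions for illustration; the paper builds one model per frequently misrecognized word.

```python
import numpy as np

rng = np.random.default_rng(0)

class CorrectionMLP:
    """Tiny MLP mapping a recognizer output word id to a corrected word id."""

    def __init__(self, vocab_size, hidden=8):
        self.n = vocab_size
        self.W1 = rng.normal(0, 0.5, (vocab_size, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0, 0.5, (hidden, vocab_size))
        self.b2 = np.zeros(vocab_size)

    def _forward(self, X):
        h = np.tanh(X @ self.W1 + self.b1)
        logits = h @ self.W2 + self.b2
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        return h, e / e.sum(axis=1, keepdims=True)   # softmax probabilities

    def fit(self, recognized, correct, epochs=2000, lr=0.5):
        """Train on (recognizer output, user-corrected word) pairs."""
        X = np.eye(self.n)[recognized]
        Y = np.eye(self.n)[correct]
        for _ in range(epochs):
            h, p = self._forward(X)
            d2 = (p - Y) / len(X)             # softmax cross-entropy gradient
            d1 = (d2 @ self.W2.T) * (1 - h ** 2)
            self.W2 -= lr * h.T @ d2; self.b2 -= lr * d2.sum(0)
            self.W1 -= lr * X.T @ d1; self.b1 -= lr * d1.sum(0)

    def correct(self, word_id):
        _, p = self._forward(np.eye(self.n)[[word_id]])
        return int(p.argmax())
```

After training, words the recognizer reliably confuses get remapped, while correctly recognized words pass through unchanged.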

An Implementation of the Vocabulary Independent Speech Recognition System Using VCCV Unit (VCCV단위를 이용한 어휘독립 음성인식 시스템의 구현)

  • 윤재선;홍광석
    • The Journal of the Acoustical Society of Korea
    • /
    • v.21 no.2
    • /
    • pp.160-166
    • /
    • 2002
  • In this paper, we implement a new vocabulary-independent speech recognition system that uses CV, VCCV, and VC recognition units. Since these units are extracted around the vowel region of the syllable, segmentation is easy and robust. When a VCCV unit does not exist, it is replaced by a combination of VC and CV semi-syllable models. Clustering the vowel group and applying the combination rule for substitution when a VCCV model is missing improved first-candidate recognition performance by 5.2%, from 90.4% (Model A) to 95.6% (Model C). The 98.8% recognition rate for the second candidate confirms the effectiveness of the proposed method.
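The unit decomposition with VC + CV back-off can be sketched on a romanized phone string. The phone inventory, the cluster-splitting rule, and the `available_vccv` set are toy assumptions for illustration, not the paper's actual Korean phone set or substitution rule.

```python
VOWELS = set("aeiou")

def vccv_units(phones, available_vccv):
    """Split a romanized phone string into CV / VCCV / VC recognition units.

    Each inter-vowel span is a candidate VCCV unit; when its model is
    unavailable, back off to a VC + CV semi-syllable pair (the
    substitution idea described in the abstract).
    """
    v = [i for i, p in enumerate(phones) if p in VOWELS]
    units = [phones[:v[0] + 1]]                 # leading CV unit
    for a, b in zip(v, v[1:]):
        unit = phones[a:b + 1]                  # candidate VCCV span
        if unit in available_vccv:
            units.append(unit)
        else:                                   # back-off: VC + CV pair
            cc = phones[a + 1:b]                # consonant cluster
            half = max(1, len(cc) // 2)
            start = half if len(cc) > 1 else 0  # share a lone consonant
            units.append(phones[a] + cc[:half])       # VC semi-syllable
            units.append(cc[start:] + phones[b])      # CV semi-syllable
    units.append(phones[v[-1]:])                # trailing V(C) unit
    return units
```

Because every unit boundary sits in a vowel's stable region rather than at a consonant transition, segmentation errors are less damaging.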

A Study on the Improvement of Isolated Word Recognition for Telephone Speech (전화음성의 격리단어인식 개선에 관한 연구)

  • Do, Sam-Joo;Un, Chong-Kwan
    • The Journal of the Acoustical Society of Korea
    • /
    • v.9 no.4
    • /
    • pp.66-76
    • /
    • 1990
  • In this work, the effect of noise and distortion of a telephone channel on speech recognition is studied, and methods to improve the recognition rate are proposed. Computer simulation was done using test data made by pronouncing 100 phonetically balanced Korean isolated words ten times each in a speaker-dependent mode. First, a spectral subtraction method is suggested to improve noisy-speech recognition. Then, the effect of bandwidth limiting and channel distortion is studied. It was found that bandwidth limiting and amplitude distortion lower the recognition rate significantly, but phase distortion has little effect. To reduce the channel effect, we modify the reference pattern according to some training data. When both channel noise and distortion exist, the recognition rate without the proposed method is merely 7.7~26.4%, but with the proposed method it increases drastically to 76.2~92.3%.
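The spectral subtraction step can be sketched as the classic magnitude-domain form: estimate the noise spectrum from assumed speech-free leading frames, subtract it, and keep a small spectral floor. Parameter names and defaults are illustrative, not the paper's.

```python
import numpy as np

def spectral_subtraction(frames, n_noise_frames=5, alpha=1.0, beta=0.02):
    """Magnitude-domain spectral subtraction.

    `frames` is an array of FFT magnitude spectra (frames x bins).  The
    noise spectrum is the average of the first `n_noise_frames` frames;
    `alpha` is the over-subtraction factor and `beta` a spectral floor
    that limits musical noise.
    """
    noise = frames[:n_noise_frames].mean(axis=0)
    cleaned = frames - alpha * noise
    return np.maximum(cleaned, beta * noise)   # never go below the floor
```

The floor matters in practice: subtracting an average noise estimate from individual frames routinely drives some bins negative, and clamping to a small fraction of the noise spectrum sounds far less artificial than clamping to zero.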


Improvement of Recognition Performance for Limabeam Algorithm by using MLLR Adaptation

  • Nguyen, Dinh Cuong;Choi, Suk-Nam;Chung, Hyun-Yeol
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.8 no.4
    • /
    • pp.219-225
    • /
    • 2013
  • This paper presents a method using Maximum-Likelihood Linear Regression (MLLR) adaptation to improve the recognition performance of the Limabeam algorithm for microphone-array speech recognition. Our investigation of the Limabeam algorithm shows that the performance of the filter optimization depends strongly on the supporting optimal state sequence, which is created by the Viterbi algorithm with a trained HMM. We therefore propose using MLLR adaptation when recognizing speech uttered in a new environment, to obtain a better optimal state sequence for the filter-parameter optimization step. Experimental results show that the system with MLLR adaptation achieves a word-correct recognition rate 2% higher than that of the original calibrated Limabeam and 7% higher than that of the delay-and-sum algorithm. The best recognition accuracy of 89.4% is obtained using 4 microphones with 5 utterances for adaptation.
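The core of MLLR mean adaptation is an affine transform of the Gaussian means, mu' = A mu + b, shared across many states. The sketch below estimates such a transform by unweighted least squares over (model mean, adaptation-data mean) pairs; real MLLR maximizes an EM auxiliary function with state-occupancy weights, so this is only an illustration of the idea.

```python
import numpy as np

def estimate_mllr_transform(means, adapted_means):
    """Estimate a global mean transform  mu' = A @ mu + b  by least squares
    over paired model means and adaptation-data means."""
    ext = np.hstack([means, np.ones((len(means), 1))])   # append bias column
    W, *_ = np.linalg.lstsq(ext, adapted_means, rcond=None)
    A, b = W[:-1].T, W[-1]
    return A, b

def adapt_means(means, A, b):
    """Apply the transform to every Gaussian mean at once."""
    return means @ A.T + b
```

Because one (A, b) pair is shared by all Gaussians, a handful of adaptation utterances is enough to move every mean toward the new acoustic environment, which matches the abstract's 5-utterance setup.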

CHMM Modeling using LMS Algorithm for Continuous Speech Recognition Improvement (연속 음성 인식 향상을 위해 LMS 알고리즘을 이용한 CHMM 모델링)

  • Ahn, Chan-Shik;Oh, Sang-Yeob
    • Journal of Digital Convergence
    • /
    • v.10 no.11
    • /
    • pp.377-382
    • /
    • 2012
  • In this paper, an echo-noise-robust CHMM learning model is proposed that uses an echo-cancellation average-estimator LMS algorithm, so that the model can adapt to changing echo noise. To improve continuous speech recognition performance, CHMM models were constructed using this echo-cancellation algorithm. As a result, the SNR of speech obtained by removing the changing environmental noise improved by an average of 1.93 dB, and the recognition rate improved by 2.1%.
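The LMS echo-cancellation step can be sketched as a standard adaptive FIR filter: predict the echo from the far-end reference, subtract it from the microphone signal, and use the residual both as the cleaned speech and as the adaptation error. Tap count and step size below are illustrative choices, not the paper's.

```python
import numpy as np

def lms_echo_cancel(far_end, mic, taps=8, mu=0.05):
    """Cancel an echo of `far_end` present in `mic` with an LMS adaptive
    FIR filter.  Returns the residual (echo-free estimate) and the final
    filter weights."""
    w = np.zeros(taps)
    out = np.zeros(len(mic))
    for n in range(taps, len(mic)):
        x = far_end[n - taps:n][::-1]      # most recent samples first
        e = mic[n] - w @ x                 # residual after echo estimate
        w += mu * e * x                    # LMS weight update
        out[n] = e
    return out, w
```

As the weights converge toward the true echo path, the residual shrinks toward the near-end speech alone, which is the signal the CHMMs are then trained on.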

A Study on Improving Speech Recognition Rate (H/W, S/W) of Speech Impairment by Neurological Injury (신경학적 손상에 의한 언어장애인 음성 인식률 개선(H/W, S/W)에 관한 연구)

  • Lee, Hyung-keun;Kim, Soon-hub;Yang, Ki-Woong
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.23 no.11
    • /
    • pp.1397-1406
    • /
    • 2019
  • In everyday mobile phone calls between people with speech impairment due to neurological injury and non-disabled people, communication accuracy is often hindered by the combination of reduced pronunciation accuracy and the pronunciation characteristics of the impaired speaker. To improve this, the hardware improvement is a MEMS (micro-electro-mechanical systems) microphone device that includes an induction line to artificially correct vocalizations that are difficult to produce, according to the oral characteristics of the impaired speaker, thereby reducing out-of-vocabulary words. The software improvement is a decision tree with an invert function, together with an improved matrix-vector RNN method that takes the characteristics of continuous words into account. Combining the H/W and S/W characteristics, a similarity dictionary was created, contributing to improved speech intelligibility for smooth communication.

Performance Improvement of Speech Recognition Using Context and Usage Pattern Information (문맥 및 사용 패턴 정보를 이용한 음성인식의 성능 개선)

  • Song, Won-Moon;Kim, Myung-Won
    • The KIPS Transactions:PartB
    • /
    • v.13B no.5 s.108
    • /
    • pp.553-560
    • /
    • 2006
  • Speech recognition has recently been investigated to produce more reliable results in noisy environments by integrating diverse sources of information at the result-derivation level or by post-processing the initial recognition results. In this paper we propose a method that uses the user's usage patterns and context information in speech-command recognition for personal mobile devices to improve recognition accuracy in a noisy environment. Sequential usage (or speech) patterns preceding the current spoken command are used to adjust the base recognition results. For context information, we use the relevance between the device's current function and the spoken command. Our experimental results show that the proposed method achieves about a 50% error correction rate over the base recognition system, demonstrating its feasibility.
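The adjustment of base recognition results can be sketched as a log-linear re-ranking of the N-best list, combining the acoustic score with a usage-pattern bigram and a context relevance score. The function names, smoothing constant, and the specific combination are illustrative assumptions, not the paper's exact formulation.

```python
import math

def rescore(hypotheses, prev_command, context_fn, usage_bigrams, lam=0.5):
    """Re-rank N-best command hypotheses with usage and context information.

    `hypotheses` maps each candidate command to its acoustic score in
    (0, 1]; `usage_bigrams` holds P(command | previous command) learned
    from the user's history; `context_fn` scores how relevant a command
    is to the device function currently in use.
    """
    def score(cmd):
        usage = usage_bigrams.get((prev_command, cmd), 1e-3)   # smoothed
        return (math.log(hypotheses[cmd])
                + lam * (math.log(usage) + math.log(context_fn(cmd))))
    return max(hypotheses, key=score)
```

In the example below, the acoustically weaker hypothesis wins because the user habitually says it after opening the contacts screen, which is exactly the kind of error a noisy acoustic front end produces.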

Emotion Recognition Method from Speech Signal Using the Wavelet Transform (웨이블렛 변환을 이용한 음성에서의 감정 추출 및 인식 기법)

  • Go, Hyoun-Joo;Lee, Dae-Jong;Park, Jang-Hwan;Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.14 no.2
    • /
    • pp.150-155
    • /
    • 2004
  • In this paper, an emotion recognition method using the speech signal is presented. Six basic human emotions are investigated: happiness, sadness, anger, surprise, fear, and dislike. The proposed recognizer has a codebook for each emotional state, constructed using the wavelet transform. The emotional state is first verified at each filterbank, and the final recognition is obtained by a multi-decision scheme. The database consists of 360 emotional utterances from twenty speakers, each of whom spoke a sentence three times for each of the six emotional states. The proposed method improved the recognition rate by more than 5% over previous works.
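The wavelet-codebook idea can be sketched with a Haar decomposition: compute per-level detail-band energies as a feature vector, then assign the emotion whose codebook vector is nearest. Using a single codebook vector per emotion and plain Euclidean distance is a simplification of the paper's multi-band, multi-decision scheme; all names are illustrative.

```python
import numpy as np

def haar_features(signal, levels=3):
    """Log energy of Haar wavelet detail coefficients at each level.

    Signal length should be divisible by 2**levels.  A stand-in for the
    per-filterbank wavelet features used to build the codebooks.
    """
    s = np.asarray(signal, dtype=float)
    feats = []
    for _ in range(levels):
        approx = (s[0::2] + s[1::2]) / np.sqrt(2)
        detail = (s[0::2] - s[1::2]) / np.sqrt(2)
        feats.append(np.log(np.sum(detail ** 2) + 1e-9))
        s = approx
    return np.array(feats)

def classify(signal, codebooks):
    """Pick the emotion whose codebook vector is nearest in feature space."""
    f = haar_features(signal)
    return min(codebooks, key=lambda emo: np.linalg.norm(f - codebooks[emo]))
```

Detail-band energies capture how signal energy is distributed across time scales, which is the property the wavelet transform contributes over a plain spectral feature.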