• Title/Abstract/Keyword: Speech Recognition Error

Search results: 280 items (processing time: 0.025 s)

MFCC와 LPC 특징 추출 방법을 이용한 음성 인식 오류 보정 (Speech Recognition Error Compensation using MFCC and LPC Feature Extraction Method)

  • 오상엽
    • 디지털융복합연구 / Vol. 11, No. 6 / pp.137-142 / 2013
  • When a speech recognition system extracts features from an inaccurately input speech signal, the result is misrecognition or recognition as a similar phoneme. This paper therefore proposes a speech recognition error compensation method based on the features of each phoneme, using a phoneme similarity rate and a confidence measure. The phoneme similarity rate was obtained by applying the MFCC and LPC feature extraction methods to the phonemes of the trained model and was assessed with a confidence measure. Measuring the phoneme similarity rate and confidence minimized misrecognition errors, and error compensation was performed on speech judged erroneous during the recognition process. Applying the proposed system yielded a recognition rate of 98.3% and an error compensation rate of 95.5%.
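The abstract does not give the exact similarity or confidence formulas, so the following is only a minimal sketch of the two feature extractors it names, with a cosine-similarity stand-in for the phoneme similarity rate; the MFCC count, LPC order, and scoring are illustrative assumptions, using librosa.

```python
# Hedged sketch: MFCC and LPC feature extraction plus an illustrative
# phoneme-similarity score. Not the paper's exact formulation.
import numpy as np
import librosa

def extract_features(y, sr, n_mfcc=13, lpc_order=12):
    """Return MFCCs (n_mfcc x frames) and global LPC coefficients."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    lpc = librosa.lpc(y, order=lpc_order)  # all-pole model of the signal
    return mfcc, lpc

def phoneme_similarity(mfcc_test, mfcc_template):
    """Cosine similarity of mean MFCC vectors -- a simple stand-in for
    the paper's phoneme similarity rate."""
    a = mfcc_test.mean(axis=1)
    b = mfcc_template.mean(axis=1)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```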

잡음음성 음향모델 적응에 기반한 잡음에 강인한 음성인식 (Noise Robust Speech Recognition Based on Noisy Speech Acoustic Model Adaptation)

  • 정용주
    • 말소리와 음성과학 / Vol. 6, No. 2 / pp.29-34 / 2014
  • In Vector Taylor Series (VTS)-based noisy speech recognition, Hidden Markov Models (HMMs) are usually trained on clean speech, but better performance can be expected by training the HMMs on noisy speech. In a previous study, we found that Minimum Mean Square Error (MMSE) estimation of the training noisy speech in the log-spectrum domain produced improved recognition results; however, because that algorithm operated in the log-spectrum domain, it could not be used for HMM adaptation. In this paper, we modify the previous algorithm to derive a novel mathematical relation between the test and training noisy speech in the cepstrum domain, and the means and covariances of the noisy-speech HMM trained by Multi-condition TRaining (MTR) are adapted accordingly. In noisy speech recognition experiments on the Aurora 2 database, the proposed method produced a 10.6% relative improvement in Word Error Rate (WER) over the MTR method, while the previous MMSE estimation of the training noisy speech produced a 4.3% relative improvement, which shows the superiority of the proposed method.
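The paper's new cepstrum-domain relation is not reproduced in the abstract; what can be sketched with confidence is the standard VTS mismatch function it builds on, here under the simplifying assumption of a square orthonormal DCT matrix C (the practical C is a truncated mel-DCT used with a pseudo-inverse).

```python
# Standard VTS mismatch function in the cepstral domain (the baseline
# relation, not the paper's adaptation formula):
#   y = x + h + C log(1 + exp(C^{-1} (n - x - h)))
import numpy as np
from scipy.fftpack import dct, idct

def vts_mismatch(x_cep, n_cep, h_cep):
    """x_cep: clean-speech cepstrum, n_cep: additive-noise cepstrum,
    h_cep: channel cepstrum; returns the noisy-speech cepstrum y."""
    g_logmel = np.log1p(np.exp(idct(n_cep - x_cep - h_cep, norm='ortho')))
    return x_cep + h_cep + dct(g_logmel, norm='ortho')
```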

VQ/HMM에 의한 화자독립 음성인식에서 다수 후보자를 인식 대상으로 제출하는 방법에 관한 연구 (A Study on the Submission of Multiple Candidates for Decision in Speaker-Independent Speech Recognition by VQ/HMM)

  • 이창영;남호수
    • 음성과학 / Vol. 12, No. 3 / pp.115-124 / 2005
  • We investigated the submission of multiple candidates in speaker-independent speech recognition by VQ/HMM. Submission of a fixed number of candidates was examined first: as the number of candidates increased to two, three, and four, the recognition error rate decreased by 41%, 58%, and 65%, respectively, compared with that of a single candidate. We then tried an approach in which all candidates within a range of Viterbi scores are submitted; the number of candidates grew geometrically as the admitted range widened. For practical application, a combination of the two methods was also studied: we chose the candidates within a given range of Viterbi scores and limited the maximum number of submitted candidates to five. Experimental results showed that this method achieves a recognition error rate of less than 10% with an average of 3.2 candidates.
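A minimal sketch of the combined selection rule described above (a Viterbi-score window capped at five candidates); the sign convention, that higher scores are better, is an assumption.

```python
def select_candidates(nbest, margin, max_candidates=5):
    """Keep hypotheses whose Viterbi score lies within `margin` of the
    best one, capped at `max_candidates` (five in the paper).
    nbest: iterable of (label, viterbi_score) pairs."""
    ranked = sorted(nbest, key=lambda pair: pair[1], reverse=True)
    best_score = ranked[0][1]
    kept = [label for label, score in ranked if best_score - score <= margin]
    return kept[:max_candidates]
```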

Error Correction for Korean Speech Recognition using a LSTM-based Sequence-to-Sequence Model

  • Jin, Hye-won;Lee, A-Hyeon;Chae, Ye-Jin;Park, Su-Hyun;Kang, Yu-Jin;Lee, Soowon
    • 한국컴퓨터정보학회논문지 / Vol. 26, No. 10 / pp.1-7 / 2021
  • Most research on correcting speech recognition errors has targeted English, and work on Korean speech recognition remains limited. Compared with English, however, Korean speech recognition produces relatively many errors because of language-specific pronunciation phenomena such as tense consonants (된소리) and liaison (연음), so research on Korean is needed. Moreover, because existing Korean studies mainly rely on edit-distance algorithms and syllable-restoration rules, error types caused by tense consonants and liaison are hard to correct. To correct Korean speech recognition errors caused by such pronunciations, this study proposes a context-based post-processing model that combines an LSTM-based Sequence-to-Sequence neural network with Bahdanau attention. In experiments, the model improved the recognition rate from 64% to 77% for tense consonants, from 74% to 90% for liaison, and from 69% to 84% on average. Based on these results, the proposed model should also be applicable to real applications built on speech recognition.
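The abstract names the building blocks but not the layer sizes or training setup; this PyTorch sketch shows only the Bahdanau (additive) attention component that the proposed model combines with an LSTM sequence-to-sequence network, with dimensions as assumptions.

```python
import torch
import torch.nn as nn

class BahdanauAttention(nn.Module):
    """Additive attention over encoder outputs, as combined with the
    LSTM seq2seq post-processor; hidden size is illustrative."""
    def __init__(self, hidden_size):
        super().__init__()
        self.w_dec = nn.Linear(hidden_size, hidden_size, bias=False)
        self.w_enc = nn.Linear(hidden_size, hidden_size, bias=False)
        self.v = nn.Linear(hidden_size, 1, bias=False)

    def forward(self, dec_state, enc_outputs):
        # dec_state: (batch, hidden); enc_outputs: (batch, src_len, hidden)
        scores = self.v(torch.tanh(
            self.w_dec(dec_state).unsqueeze(1) + self.w_enc(enc_outputs)))
        weights = torch.softmax(scores, dim=1)         # (batch, src_len, 1)
        context = (weights * enc_outputs).sum(dim=1)   # (batch, hidden)
        return context, weights.squeeze(-1)
```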

감정에 강인한 음성 인식을 위한 음성 파라메터 (Speech Parameters for the Robust Emotional Speech Recognition)

  • 김원구
    • 제어로봇시스템학회논문지 / Vol. 16, No. 12 / pp.1137-1142 / 2010
  • This paper studied speech parameters that are less affected by human emotion, for the development of a robust speech recognition system. To this end, the effect of emotion on a speech recognition system and robust speech parameters were studied using a speech database containing various emotions. Mel-cepstral coefficients, delta-cepstral coefficients, RASTA mel-cepstral coefficients, and frequency-warped mel-cepstral coefficients were used as feature parameters, and Cepstral Mean Subtraction (CMS) was used as a signal-bias removal technique. Experimental results showed that an HMM-based speaker-independent word recognizer using vocal-tract-length-normalized mel-cepstral coefficients, their derivatives, and CMS achieved the best performance, with a 0.78% word error rate. This corresponds to roughly a 50% reduction in word errors compared with the baseline system using mel-cepstral coefficients, their derivatives, and CMS.
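Of the techniques listed, CMS is simple enough to sketch exactly: the time average of each cepstral coefficient is subtracted per utterance, removing a stationary channel bias.

```python
import numpy as np

def cepstral_mean_subtraction(cepstra):
    """Per-utterance CMS. cepstra: (frames, coefficients) array."""
    return cepstra - cepstra.mean(axis=0, keepdims=True)
```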

A Study on the Optimal Mahalanobis Distance for Speech Recognition

  • Lee, Chang-Young
    • 음성과학 / Vol. 13, No. 4 / pp.177-186 / 2006
  • In an effort to enhance the quality of feature-vector classification and thereby reduce the recognition error rate of speaker-independent speech recognition, we employ the Mahalanobis distance as the similarity measure between feature vectors. The metric matrix of the Mahalanobis distance is assumed to be diagonal to reduce memory and computation costs. We propose that the diagonal elements be given in terms of the variances of the feature-vector components. Geometrically, this prescription tends to redistribute the data in the shape of a hypersphere in the feature-vector space. The idea is applied to speech recognition by a hidden Markov model with fuzzy vector quantization. The results show that recognition is improved by an appropriate choice of the relevant adjustable parameter. The Viterbi score difference between the two top candidates in the recognition test behaves consistently with the recognition error rate.
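A minimal sketch of a diagonal Mahalanobis distance whose weights come from the per-dimension variances; the exponent `alpha` is an assumed stand-in for the paper's adjustable parameter (alpha = 1 gives the usual variance normalization, alpha = 0 the plain Euclidean distance).

```python
import numpy as np

def diagonal_mahalanobis(x, y, variances, alpha=1.0):
    """Distance with diagonal metric w_i = variances_i ** (-alpha)."""
    w = variances ** (-alpha)
    d = x - y
    return float(np.sqrt(np.sum(w * d * d)))
```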

Semantic-Oriented Error Correction for Voice-Activated Information Retrieval System

  • Yoon, Yong-Wook;Kim, Byeong-Chang;Lee, Gary-Geunbae
    • 대한음성학회지:말소리 / No. 44 / pp.115-130 / 2002
  • Voice input is required in many new application environments, but low speech recognition rates make it difficult to extend its applications. Previous approaches raised recognition accuracy by post-processing the recognition results, and all of them were lexical-oriented. We suggest a new semantic-oriented approach to speech recognition error correction. Through experiments on a speech-driven in-vehicle telematics information application, we show the strong performance of our approach and some advantages it has, as a semantic-oriented approach, over a purely lexical-oriented approach.

한국어 연속 숫자음 전화 음성 인식에서의 오인식 유형 분석 (Analysis of Error Patterns in Korean Connected Digit Telephone Speech Recognition)

  • 김민성;정성윤;손종목;배건성;김상훈
    • 대한음성학회지:말소리 / No. 46 / pp.77-86 / 2003
  • Channel distortion and the coarticulation effect in Korean connected-digit telephone speech make it difficult to achieve high connected-digit recognition performance in the telephone environment. In this paper, as basic research toward improving the recognition performance of Korean connected-digit telephone speech, recognition error patterns are investigated and analyzed. The Korean connected-digit telephone speech database released by SiTEC and the HTK system are used for the recognition experiments, with DWFBA for feature extraction and MRTCN for channel compensation. Experimental results are discussed along with our findings.
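The abstract does not spell out how the error patterns were tallied; one common form such an analysis takes is a substitution (confusion) count over aligned digit strings, sketched here under the assumption that alignment has already been done.

```python
from collections import Counter

def digit_confusions(aligned_pairs):
    """Tally substitutions between aligned reference/hypothesis digit
    strings of equal length. aligned_pairs: iterable of (ref, hyp)."""
    confusions = Counter()
    for ref, hyp in aligned_pairs:
        for r, h in zip(ref, hyp):
            if r != h:
                confusions[(r, h)] += 1
    return confusions.most_common()
```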

KMSAV: Korean multi-speaker spontaneous audiovisual dataset

  • Kiyoung Park;Changhan Oh;Sunghee Dong
    • ETRI Journal / Vol. 46, No. 1 / pp.71-81 / 2024
  • Recent advances in deep learning for speech and visual recognition have accelerated the development of multimodal speech recognition, yielding many innovative results. We introduce a Korean audiovisual speech recognition corpus. The dataset comprises approximately 150 h of manually transcribed and annotated audiovisual data, supplemented with an additional 2,000 h of untranscribed videos collected from YouTube under the Creative Commons License. The dataset is intended to be freely accessible for unrestricted research purposes. Along with the corpus, we propose an open-source framework for automatic speech recognition (ASR) and audiovisual speech recognition (AVSR). We validate the effectiveness of the corpus with evaluations using state-of-the-art ASR and AVSR techniques, capitalizing on both pretrained models and fine-tuning. After fine-tuning, ASR and AVSR achieve character error rates of 11.1% and 18.9%, respectively. This error difference highlights the need for improvement in AVSR techniques. We expect our corpus to be an instrumental resource for improving AVSR.
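The character error rates quoted above follow the standard definition (edit distance between reference and hypothesis, divided by reference length); a self-contained sketch:

```python
def character_error_rate(ref, hyp):
    """CER via the single-row Levenshtein dynamic program."""
    n = len(hyp)
    row = list(range(n + 1))
    for i, r in enumerate(ref, start=1):
        prev, row[0] = row[0], i
        for j, h in enumerate(hyp, start=1):
            cur = row[j]
            row[j] = min(row[j] + 1,        # deletion
                         row[j - 1] + 1,    # insertion
                         prev + (r != h))   # substitution or match
            prev = cur
    return row[n] / max(len(ref), 1)
```

For example, `character_error_rate("음성인식", "음성안식")` returns 0.25 (one substitution over four reference characters).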

Digital enhancement of pronunciation assessment: Automated speech recognition and human raters

  • Miran Kim
    • 말소리와 음성과학 / Vol. 15, No. 2 / pp.13-20 / 2023
  • This study explores the potential of automated speech recognition (ASR) in assessing English learners' pronunciation. We employed ASR technology, acknowledged for its impartiality and consistent results, to analyze speech audio files, including synthesized speech (both native-like English and Korean-accented English) and recordings from a native English speaker. Through this analysis, we establish baseline values for the word error rate (WER). These were then compared with those obtained for human raters in perception experiments that assessed the speech productions of 30 first-year college students before and after taking a pronunciation course. Our sub-group analyses revealed positive training effects for Whisper, an ASR tool, and for human raters, and identified distinct human-rater strategies in different assessment aspects, such as proficiency, intelligibility, accuracy, and comprehensibility, that were not observed in ASR. Despite such challenges as recognizing accented speech traits, our findings suggest that digital tools such as ASR can streamline the pronunciation assessment process. With ongoing advancements in ASR technology, its potential as not only an assessment aid but also a self-directed learning tool for pronunciation feedback merits further exploration.
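A hedged sketch of the shape of such a pipeline (the model size, file name, and text normalization are assumptions, not the study's settings), using the openly available Whisper package for transcription and jiwer for WER scoring:

```python
import whisper          # pip install openai-whisper
from jiwer import wer   # pip install jiwer

model = whisper.load_model("base")                  # model size: assumption
result = model.transcribe("learner_recording.wav")  # hypothetical file name
reference = "the quick brown fox jumps over the lazy dog"  # illustrative text
hypothesis = result["text"].lower().strip()
print("hypothesis:", hypothesis)
print("WER:", wer(reference, hypothesis))
```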