• Title/Abstract/Keyword: Speech Recognition Error

Search results: 282

수정된 MAP 적응 기법을 이용한 음성 데이터 자동 군집화 (Automatic Clustering of Speech Data Using Modified MAP Adaptation Technique)

  • 반성민;강병옥;김형순
    • 말소리와 음성과학 / Vol.6 No.1 / pp.77-83 / 2014
  • This paper proposes a speaker and environment clustering method to overcome the degradation of speech recognition performance caused by diverse noise and speaker characteristics. Instead of using the distance between Gaussian mixture model (GMM) weight vectors, as in Google's approach, the distance between mean vectors adapted with a modified maximum a posteriori (MAP) adaptation is used as the distance measure for vector quantization (VQ) clustering. In experiments on simulation data generated by adding noise to clean speech, the proposed clustering method yields an error rate reduction of 10.6% over the baseline speaker-independent (SI) model, slightly better than Google's approach.
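A minimal sketch of the kind of pipeline the abstract describes, with standard relevance-MAP adaptation standing in for the paper's modified MAP variant (which is not specified here); the UBM size, relevance factor, and toy data are assumptions, and k-means plays the role of VQ clustering.

```python
# Sketch only: cluster utterances by the distance between MAP-adapted GMM mean
# supervectors, then quantize with k-means as the VQ step.  Standard relevance
# MAP is used as a stand-in for the paper's modified MAP adaptation.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.cluster import KMeans

def map_adapted_supervector(ubm, feats, relevance=16.0):
    """Adapt the UBM means to one utterance and stack them into a supervector."""
    post = ubm.predict_proba(feats)              # (T, K) component responsibilities
    n_k = post.sum(axis=0)                       # soft frame counts per component
    x_bar = (post.T @ feats) / np.maximum(n_k[:, None], 1e-8)
    alpha = (n_k / (n_k + relevance))[:, None]   # data-dependent adaptation weight
    adapted = alpha * x_bar + (1.0 - alpha) * ubm.means_
    return adapted.ravel()

# toy data: four "utterances" of MFCC-like frames from two acoustic conditions
rng = np.random.default_rng(0)
utterances = [rng.normal(loc=c, size=(200, 13)) for c in (0.0, 0.0, 2.0, 2.0)]

ubm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0)
ubm.fit(np.vstack(utterances))                   # speaker/environment-independent UBM

supervectors = np.array([map_adapted_supervector(ubm, u) for u in utterances])
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(supervectors)
print("cluster assignment per utterance:", clusters)
```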

채널보상기법을 사용한 전화 음성 연속숫자음의 인식 성능향상 (Performance Improvement of Connected Digit Recognition with Channel Compensation Method for Telephone speech)

  • 김민성;정성윤;손종목;배건성
    • 대한음성학회지:말소리 / No.44 / pp.73-82 / 2002
  • Channel distortion degrades the performance of speech recognizers in telephone environments. It results mainly from the bandwidth limitation and the variation of the transmission channel. Variation of channel characteristics is usually represented as a baseline shift in the cepstrum domain, so the undesirable effect of channel variation can be removed by subtracting the mean from the cepstrum. In this paper, to improve recognition performance on Korean connected digit telephone speech, channel compensation methods such as CMN (Cepstral Mean Normalization), RTCN (Real-Time Cepstral Normalization), MCMN (Modified CMN), and MRTCN (Modified RTCN) are applied to the static MFCC. MCMN and MRTCN are obtained from CMN and RTCN, respectively, by adding variance normalization in the cepstrum domain. Using the HTK v3.1 system, recognition experiments are performed on the Korean connected digit telephone speech database released by SITEC (Speech Information Technology & Industry Promotion Center). Experiments show that MRTCN gives the best result, with a connected digit recognition rate of 90.11%. This corresponds to an improvement of 1.72% over MFCC alone, i.e., an error reduction rate of 14.82%.
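As a rough illustration of the compensation step, here is a minimal per-utterance sketch of CMN and its variance-normalized variant (the idea behind MCMN/MRTCN); the recursive real-time update used by RTCN is not shown, and the frame count and dimensionality are arbitrary.

```python
# Per-utterance cepstral mean normalization (CMN) and the variance-normalized
# variant, applied to static MFCC frames of shape (T, D).  Illustrative only.
import numpy as np

def cmn(cepstra):
    """Remove the per-utterance cepstral mean (stationary channel bias)."""
    return cepstra - cepstra.mean(axis=0, keepdims=True)

def cmvn(cepstra, eps=1e-8):
    """CMN followed by per-dimension variance normalization."""
    centered = cmn(cepstra)
    return centered / (centered.std(axis=0, keepdims=True) + eps)

# example: 300 frames of 12-dimensional static MFCCs with a constant channel offset
rng = np.random.default_rng(1)
mfcc = rng.normal(size=(300, 12)) + 0.7          # 0.7 acts as the channel bias
print(np.abs(cmvn(mfcc).mean(axis=0)).max())     # ~0 once the bias is removed
```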


자바를 이용한 음성인식 시스템에 관한 연구 (Study of Speech Recognition System Using the Java)

  • 최광국;김철;최승호;김진영
    • 한국음향학회지 / Vol.19 No.6 / pp.41-46 / 2000
  • In this paper, a speech recognition system is implemented in Java using a continuous-density HMM algorithm and a browser-embedded model. The system is designed to carry out speech analysis, processing, and recognition on the web: the client uses a Java applet to detect speech endpoints and extract MFCC, energy, and delta coefficients, which are sent to the server through a socket; the server performs recognition with an HMM recognizer and a training database, and the result is returned to the client and displayed as text. Although the system, being platform independent and running over the network, shows a high error rate, it is meaningful as an application of speech recognition to multimedia and shows potential as a future telecommunication service.
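To make the client-side feature extraction concrete, below is a small sketch of the regression-style delta coefficients commonly appended to MFCC and energy features before such a client ships them to a server; the window width and feature layout are assumptions rather than details from the paper.

```python
# Regression deltas over a (T, D) matrix of static features (e.g., MFCC + energy).
# Window width N=2 is a common choice, not a value taken from the paper.
import numpy as np

def delta(features, N=2):
    """d_t = sum_{n=1..N} n * (c_{t+n} - c_{t-n}) / (2 * sum_{n=1..N} n^2)"""
    T = len(features)
    padded = np.pad(features, ((N, N), (0, 0)), mode="edge")
    denom = 2 * sum(n * n for n in range(1, N + 1))
    out = np.zeros_like(features, dtype=float)
    for n in range(1, N + 1):
        out += n * (padded[N + n:N + n + T] - padded[N - n:N - n + T])
    return out / denom

rng = np.random.default_rng(2)
static = rng.normal(size=(100, 13))              # 12 MFCCs + log energy per frame
feats = np.hstack([static, delta(static)])       # static + delta, sent to the server
print(feats.shape)                               # (100, 26)
```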


Fast offline transformer-based end-to-end automatic speech recognition for real-world applications

  • Oh, Yoo Rhee;Park, Kiyoung;Park, Jeon Gue
    • ETRI Journal / Vol.44 No.3 / pp.476-490 / 2022
  • With recent advances in technology, automatic speech recognition (ASR) has been widely used in real-world applications. The efficiency of accurately converting large amounts of speech into text with limited resources has become more vital than ever. In this study, we propose a method to rapidly recognize a large speech database with a transformer-based end-to-end model. Transformers have improved the state-of-the-art performance in many fields, but they are not easy to use for long sequences. Various techniques to accelerate the recognition of real-world speech are proposed and tested, including decoding via multiple-utterance-batched beam search, detecting the end of speech based on connectionist temporal classification (CTC), restricting the CTC-prefix score, and splitting long speech into short segments. Experiments are conducted on the Librispeech dataset and real-world Korean ASR tasks to verify the proposed methods. The proposed system converts 8 h of speech from real-world meetings into text in less than 3 min with a 10.73% character error rate, which is 27.1% relatively lower than that of conventional systems.
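One of the listed techniques, splitting long recordings into short segments, can be sketched as below; fixed-length windows with a small overlap are an assumption for illustration, whereas the paper's own splitting is tied to the model (e.g., CTC-based end-of-speech cues).

```python
# Sketch: split a long recording into short segments so a transformer decoder
# never sees an overly long sequence; segments can then be decoded with
# multiple-utterance-batched beam search.  Segment length and overlap are guesses.
import numpy as np

def split_long_audio(samples, sr=16000, max_sec=20.0, overlap_sec=0.5):
    """Yield (start_sample, segment) pairs no longer than max_sec seconds."""
    seg_len = int(max_sec * sr)
    hop = seg_len - int(overlap_sec * sr)
    for start in range(0, len(samples), hop):
        segment = samples[start:start + seg_len]
        if len(segment) == 0:
            break
        yield start, segment
        if start + seg_len >= len(samples):
            break

audio = np.zeros(8 * 60 * 16000, dtype=np.float32)       # e.g., an 8-minute recording
segments = list(split_long_audio(audio))
print(len(segments), "segments for batched beam-search decoding")
```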

음성인식을 이용한 자동 호 분류 철도 예약 시스템 (A Train Ticket Reservation Aid System Using Automated Call Routing Technology Based on Speech Recognition)

  • 심유진;김재인;구명완
    • 대한음성학회지:말소리 / No.52 / pp.161-169 / 2004
  • This paper describes automated call routing for a train ticket reservation aid system based on speech recognition. We focus on the task of automatically routing telephone calls based on the user's fluently spoken responses instead of touch-tone menus in an interactive voice response system. A vector-based call routing algorithm is investigated, and a mapping table for key terms is suggested. The Korail database collected by KT is used for the call routing experiments, and call-classification results on transcribed text from that database are evaluated. With small training data, an average call routing error reduction of 14% is observed when the mapping table is used.
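A hedged sketch of the vector-based routing idea with a key-term mapping table: destination documents and caller utterances are rewritten onto canonical key terms, vectorized, and routed by cosine similarity. The route names, mapping table, and toy data are invented for illustration and are not from the Korail task.

```python
# Vector-based call routing with a key-term mapping table (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

KEY_TERM_MAP = {"ktx": "train", "mugunghwa": "train", "cancel": "refund"}

def normalize(utterance):
    """Rewrite caller words onto canonical key terms before vectorization."""
    return " ".join(KEY_TERM_MAP.get(w, w) for w in utterance.lower().split())

routes = {
    "reservation": ["book a train ticket", "reserve a seat on the train"],
    "refund":      ["cancel my ticket", "get my money back"],
}
docs = [" ".join(normalize(u) for u in examples) for examples in routes.values()]

vectorizer = TfidfVectorizer()
route_vecs = vectorizer.fit_transform(docs)               # one vector per destination

def route(utterance):
    vec = vectorizer.transform([normalize(utterance)])
    scores = cosine_similarity(vec, route_vecs)[0]
    return list(routes)[scores.argmax()]

print(route("I want to cancel my ktx ticket"))            # -> "refund"
```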


화자 식별에서의 배경화자데이터를 이용한 히스토그램 등화 기법 (Histogram Equalization Using Background Speakers' Utterances for Speaker Identification)

  • 김명재;양일호;소병민;김민석;유하진
    • 말소리와 음성과학 / Vol.4 No.2 / pp.79-86 / 2012
  • In this paper, we propose a novel approach to improve histogram equalization for speaker identification. Our method collects all speech features of the UBM training data to form a reference distribution. The rank of each feature vector is calculated in the sorted list of the combined UBM training data and test data, and these ranks are used to perform order-based histogram equalization. The proposed method improves the accuracy of the speaker recognition system for short utterances. We use four kinds of speech databases to evaluate the proposed system and compare it with cepstral mean normalization (CMN), mean and variance normalization (MVN), and histogram equalization (HEQ). Our system reduces the relative error rate by 33.3% from the baseline system.
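A minimal sketch of order-based histogram equalization with a reference distribution built from background (UBM-training) features; for brevity, the ranks here are taken within the test utterance only, so the paper's specific pooled-ranking refinement is not reproduced.

```python
# Order-based HEQ with a background-speaker reference distribution (sketch).
import numpy as np
from scipy.stats import rankdata

def heq_with_background(test_feats, background_feats):
    """Map each test feature dimension onto the background reference distribution."""
    T, D = test_feats.shape
    equalized = np.empty_like(test_feats, dtype=float)
    for d in range(D):
        cdf = rankdata(test_feats[:, d]) / (T + 1)                   # rank -> empirical CDF
        equalized[:, d] = np.quantile(background_feats[:, d], cdf)   # inverse reference CDF
    return equalized

rng = np.random.default_rng(3)
background = rng.normal(size=(5000, 13))             # pooled background speakers' features
test = 0.5 * rng.normal(size=(200, 13)) + 1.0        # short utterance with channel mismatch
out = heq_with_background(test, background)
print(out.mean(axis=0)[:3], out.std(axis=0)[:3])     # roughly matches the reference stats
```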

모바일 기기를 위한 음성인식의 사용자 적응형 후처리 (User Adaptive Post-Processing in Speech Recognition for Mobile Devices)

  • 김영진;김은주;김명원
    • 한국정보과학회논문지:컴퓨팅의 실제 및 레터 / Vol.13 No.5 / pp.338-342 / 2007
  • In this paper, we propose a user-adaptive post-processing method that exploits speaker dependence to improve the performance of isolated-word speech recognition in a mobile environment. The method consists of additional processing steps applied to obtain the correct final recognition result: the relationship between the recognizer's outputs and the correct final results is learned and then used to correct erroneous recognizer outputs. A multilayer perceptron, which is robust for pattern recognition, is used for learning; considering training time, the model is partitioned and implemented so that it can operate dynamically. As a result, the proposed method corrects 41% of the recognizer's errors (error correction rate: 41%).
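The idea can be illustrated with a toy sketch: a small MLP trained on one user's (recognizer output, intended word) pairs rewrites outputs that are systematically wrong for that user. The vocabulary, confusion pattern, and one-hot encoding are illustrative assumptions, not the paper's setup.

```python
# User-adaptive post-processing sketch: learn a mapping from recognizer outputs
# to the words this user actually said, then rewrite new outputs.
import numpy as np
from sklearn.neural_network import MLPClassifier

vocab = ["call", "text", "camera", "music"]
to_onehot = lambda w: np.eye(len(vocab))[vocab.index(w)]

# (recognizer output, intended word) pairs collected for this user;
# this user's "call" is systematically misrecognized as "camera".
history = [("camera", "call")] * 30 + [("text", "text")] * 30 + [("music", "music")] * 30
X = np.array([to_onehot(rec) for rec, _ in history])
y = [truth for _, truth in history]

mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)

def post_process(recognizer_output):
    return mlp.predict([to_onehot(recognizer_output)])[0]

print(post_process("camera"))    # corrected to "call" for this user
```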

음성인식을 이용한 고객센터 자동 호 분류 시스템 (Automated Call Routing Call Center System Based on Speech Recognition)

  • 심유진;김재인;구명완
    • 음성과학 / Vol.12 No.2 / pp.183-191 / 2005
  • This paper describes an automated call routing system for a call center based on speech recognition. We focus on the task of automatically routing telephone calls based on the user's fluently spoken responses instead of touch-tone menus in an interactive voice response system. A vector-based call routing algorithm is investigated, and a normalization method is suggested. The call center database collected by KT is used for the call routing experiments, and results evaluating call classification from transcribed speech are reported for that database. With small training data, an average call routing error reduction of 9% is observed when the normalization method is used.
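The abstract does not spell out the normalization method, so the sketch below only illustrates the simplest candidate: L2 length normalization of term-count routing vectors, which keeps long destination documents from dominating the routing score.

```python
# Length normalization for vector-based routing (a guess at the kind of
# normalization meant; the destination counts and query are toy values).
import numpy as np

def l2_normalize(v, eps=1e-12):
    return v / (np.linalg.norm(v) + eps)

dest_counts = np.array([[40., 2., 1.],       # term counts for a frequently-called destination
                        [ 4., 3., 0.]])      # term counts for a rarely-called destination
query = np.array([1., 1., 0.])

raw_scores  = dest_counts @ query                                 # biased toward the long vector
norm_scores = np.array([l2_normalize(d) @ l2_normalize(query) for d in dest_counts])
print(raw_scores.argmax(), norm_scores.argmax())                  # 0 vs. 1
```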


자동차 소음 환경에서 음성 인식 (Speech Recognition in the Car Noise Environment)

  • 김완구;차일환;윤대희
    • 전자공학회논문지B / Vol.30B No.2 / pp.51-58 / 1993
  • This paper describes the development of a speaker-dependent isolated-word recognizer applied to voice dialing in a car noise environment. For this purpose, several methods to improve performance under such conditions are evaluated using a database collected in a small car moving at 100 km/h. The main features of the recognizer are as follows. The endpoint detection error can be reduced by using the magnitude of the signal inverse-filtered by an AR model of the background noise, and the remaining error can be compensated by variants of the DTW algorithm. To remove the noise, an autocorrelation subtraction method is used with the constraint that the residual energy obtained by linear predictive analysis must be positive. A noise-robust distance measure minimizes the distortion of the feature vectors. The recognizer is implemented on a Motorola DSP56001 (24-bit general-purpose digital signal processor). The recognition database consists of 50 Korean names spoken by 3 male speakers. The recognition error rate is reduced to 4.3% using a single reference pattern per word and to 1.5% using two reference patterns per word.
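A hedged sketch of the autocorrelation-subtraction step with the positivity constraint mentioned above: the noise autocorrelation is subtracted from the noisy-speech autocorrelation, and the subtraction is scaled back until the Levinson-Durbin residual energy stays positive. The LPC order, back-off schedule, and toy signals are assumptions.

```python
# Autocorrelation subtraction with a positive-residual-energy constraint (sketch).
import numpy as np

def autocorr(x, order):
    return np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)])

def lpc_residual_energy(r, order):
    """Prediction-error energy from the Levinson-Durbin recursion on autocorrelation r."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    energy = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / energy
        a[1:i] = a[1:i] + k * a[i - 1:0:-1]
        a[i] = k
        energy *= (1.0 - k * k)
        if energy <= 0:                       # non-physical: tell the caller to back off
            return energy
    return energy

def denoised_autocorr(r_noisy, r_noise, order, alpha=1.0, steps=8):
    """Subtract alpha * r_noise, halving alpha until the residual energy is positive."""
    for _ in range(steps):
        r = r_noisy - alpha * r_noise
        if r[0] > 0 and lpc_residual_energy(r, order) > 0:
            return r
        alpha *= 0.5
    return r_noisy                            # give up: keep the unsubtracted autocorrelation

sr, order = 8000, 10
t = np.arange(sr) / sr
rng = np.random.default_rng(4)
noise = 0.3 * rng.normal(size=sr)                               # background car noise stand-in
clean = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 800 * t)
r_hat = denoised_autocorr(autocorr(clean + noise, order), autocorr(noise, order), order)
print(lpc_residual_energy(r_hat, order) > 0)                    # True: constraint satisfied
```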


음성인식 기반 응급상황관제 (Emergency dispatching based on automatic speech recognition)

  • 이규환;정지오;신대진;정민화;강경희;장윤희;장경호
    • 말소리와 음성과학 / Vol.8 No.2 / pp.31-39 / 2016
  • In emergency dispatching at the 119 Command & Dispatch Center, inconsistencies between the 'standard emergency aid system' and the 'dispatch protocol,' both of which are mandatory to follow, cause inefficiency in the dispatcher's work. If an emergency dispatch system uses automatic speech recognition (ASR) to process the dispatcher's protocol speech during case registration, it can instantly extract and provide the information required by the 'standard emergency aid system,' making the rescue command more efficient. For this purpose, we have developed a Korean large-vocabulary continuous speech recognition system with a 400,000-word vocabulary for the emergency dispatch system. The vocabulary covers news, SNS, blogs, and the emergency rescue domain. The acoustic model is trained on 1,300 hours of telephone (8 kHz) speech, and the language model is trained on a 13 GB text corpus. From a transcribed corpus of 6,600 real telephone calls, call logs with the emergency rescue command class and the identified major symptom are extracted in connection with the rescue activity log and the National Emergency Department Information System (NEDIS). ASR is applied to the dispatcher's repeated utterances of the patient information, and the emergency patient information is extracted based on the Levenshtein distance between the ASR result and the template information. Experimental results show a 9.15% word error rate for speech recognition and 95.8% emergency response detection performance for the emergency dispatch system.
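The template-matching step can be sketched as below: the Levenshtein distance between the ASR hypothesis and each candidate template entry selects the most likely intended value. The candidate symptom list and hypothesis are illustrative, not taken from the NEDIS data.

```python
# Levenshtein-distance matching of an ASR hypothesis against template entries (sketch).
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between strings a and b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def match_template(asr_hypothesis, candidates):
    """Pick the template entry closest to the recognized text."""
    return min(candidates, key=lambda c: levenshtein(asr_hypothesis, c))

symptoms = ["chest pain", "shortness of breath", "loss of consciousness"]
print(match_template("chest pains", symptoms))            # -> "chest pain"
```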