• Title/Abstract/Keywords: speech error

Search results: 581 items (processing time: 0.027 s)

잡음음성 음향모델 적응에 기반한 잡음에 강인한 음성인식 (Noise Robust Speech Recognition Based on Noisy Speech Acoustic Model Adaptation)

  • 정용주
    • 말소리와 음성과학 / Vol. 6, No. 2 / pp.29-34 / 2014
  • In Vector Taylor Series (VTS)-based noisy speech recognition methods, the Hidden Markov Models (HMMs) are usually trained on clean speech. However, better performance can be expected by training the HMMs on noisy speech. In a previous study, we found that Minimum Mean Square Error (MMSE) estimation of the training noisy speech in the log-spectrum domain produces improved recognition results; however, since that algorithm operated in the log-spectrum domain, it could not be used for HMM adaptation. In this paper, we modify the previous algorithm to derive a novel mathematical relation between the test and training noisy speech in the cepstrum domain, and the means and covariances of the Multi-condition TRaining (MTR) noisy-speech HMMs are adapted accordingly. In noisy speech recognition experiments on the Aurora 2 database, the proposed method produced a 10.6% relative improvement in Word Error Rate (WER) over the MTR method, whereas the previous MMSE estimation of the training noisy speech produced a 4.3% relative improvement, which shows the superiority of the proposed method.
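
For reference, the zeroth-order VTS approximation that this family of methods builds on relates the noisy-speech cepstral mean to the clean-speech and noise means through the mel filterbank and DCT. The sketch below illustrates only that generic relation, not the paper's novel cepstrum-domain relation between test and training noisy speech; all names are illustrative and the channel term is omitted.

```python
import numpy as np

def vts_noisy_mean(mu_x, mu_n, C, C_pinv):
    """Generic zeroth-order VTS approximation of the noisy-speech cepstral mean.

    mu_y ~= mu_x + C log(1 + exp(C^+ (mu_n - mu_x)))   (channel term omitted)

    mu_x   : cepstral mean of the speech Gaussian
    mu_n   : cepstral mean of the additive noise
    C      : truncated DCT matrix (n_cepstra x n_filterbanks)
    C_pinv : pseudo-inverse of C, mapping cepstra back to log-filterbank energies
    """
    # log(1 + exp(z)) computed stably as logaddexp(0, z)
    g = C @ np.logaddexp(0.0, C_pinv @ (mu_n - mu_x))
    return mu_x + g
```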

Digital enhancement of pronunciation assessment: Automated speech recognition and human raters

  • Miran Kim
    • 말소리와 음성과학 / Vol. 15, No. 2 / pp.13-20 / 2023
  • This study explores the potential of automated speech recognition (ASR) for assessing English learners' pronunciation. We employed ASR technology, valued for its impartiality and consistency, to analyze speech audio files, including synthesized speech (both native-like English and Korean-accented English) and recordings from a native English speaker. Through this analysis, we established baseline values for the word error rate (WER). These were then compared with the results obtained from human raters in perception experiments that assessed the speech productions of 30 first-year college students before and after taking a pronunciation course. Our sub-group analyses revealed positive training effects for both Whisper, an ASR tool, and the human raters, and identified distinct human-rater strategies across assessment aspects such as proficiency, intelligibility, accuracy, and comprehensibility that were not observed in ASR. Despite challenges such as recognizing accented speech traits, our findings suggest that digital tools such as ASR can streamline the pronunciation assessment process. With ongoing advancements in ASR technology, its potential not only as an assessment aid but also as a self-directed learning tool for pronunciation feedback merits further exploration.
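
The word error rate used as the baseline metric above is the word-level Levenshtein distance between the reference and the ASR hypothesis, divided by the number of reference words. A minimal, self-contained sketch (function name and example are illustrative, not tied to the study's toolchain):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution / match
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# e.g. word_error_rate("the cat sat", "the cat sat down") == 1/3
```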

디지털 이동통신 채널상의 14Kbps SBC-APCM(AQB)를 위한 비트선택적 에러정정부호 (Bit-selective Forward Error Correction for 14Kbps SBC-APCM (AQB) over Digital Mobile Communication Channels)

  • 김민구;이재홍
    • 대한전자공학회논문지 / Vol. 27, No. 6 / pp.821-828 / 1990
  • A forward error correction (FEC) technique is presented for speech data in 16 Kbps digital mobile communications. 14 Kbps SBC-APCM (AQB) and QPSK are used as the speech coding and modulation techniques, respectively. Because each bit in a speech data block has a different importance, applying FEC to the speech data bit-selectively is more effective than applying FEC to all speech data equally. To select the bits in a speech data block to be protected by FEC, the bit error sensitivity of each bit is computed. For several BCH and Reed-Solomon codes used as bit-selective FEC, the performance of the coding technique is evaluated.

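The bit-selective idea described above can be sketched as follows: rank the bits of each coder frame by their measured error sensitivity and channel-encode only the most sensitive ones. The frame layout, sensitivity values, and encoder below are placeholders (the paper evaluates BCH and Reed-Solomon codes; a toy parity encoder stands in here), so this illustrates the selection step rather than the paper's coder.

```python
from typing import Callable, List, Sequence

def protect_sensitive_bits(frame: Sequence[int],
                           sensitivity: Sequence[float],
                           n_protected: int,
                           encode: Callable[[List[int]], List[int]]) -> List[int]:
    """Apply FEC only to the n_protected bits with the highest error sensitivity.

    frame       : one block of speech-coder bits (0/1)
    sensitivity : per-bit error sensitivity, e.g. measured degradation when
                  that bit position is flipped
    encode      : channel encoder for the protected class (BCH/RS in the paper)
    Returns the protected-class codeword followed by the unprotected bits.
    """
    order = sorted(range(len(frame)), key=lambda i: sensitivity[i], reverse=True)
    protected = [frame[i] for i in order[:n_protected]]
    unprotected = [frame[i] for i in order[n_protected:]]
    return encode(protected) + unprotected

# Toy stand-in encoder: append a single even-parity bit to the protected class.
even_parity = lambda bits: list(bits) + [sum(bits) % 2]
```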

감정에 강인한 음성 인식을 위한 음성 파라메터 (Speech Parameters for the Robust Emotional Speech Recognition)

  • 김원구
    • 제어로봇시스템학회논문지 / Vol. 16, No. 12 / pp.1137-1142 / 2010
  • This paper studies speech parameters that are less affected by human emotion, for the development of a robust speech recognition system. For this purpose, the effect of emotion on a speech recognition system and robust speech parameters were studied using a speech database containing various emotions. Mel-cepstral coefficients, delta-cepstral coefficients, RASTA mel-cepstral coefficients, and frequency-warped mel-cepstral coefficients were used as feature parameters, and CMS (Cepstral Mean Subtraction) was used as a signal bias removal technique. Experimental results showed that an HMM-based speaker-independent word recognizer using vocal-tract-length-normalized mel-cepstral coefficients, their derivatives, and CMS for signal bias removal achieved the best performance, a word error rate of 0.78%. This corresponds to about a 50% reduction in word errors compared with the baseline system using mel-cepstral coefficients, their derivatives, and CMS.
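
Cepstral mean subtraction (CMS), the signal-bias-removal step mentioned above, removes the per-utterance mean of each cepstral coefficient so that slowly varying convolutional biases are suppressed. A minimal sketch, assuming features are stored as a frames-by-coefficients array:

```python
import numpy as np

def cepstral_mean_subtraction(cepstra: np.ndarray) -> np.ndarray:
    """cepstra: (n_frames, n_coefficients) array of mel-cepstral features.
    Subtracts the utterance-level mean of every coefficient (CMS)."""
    return cepstra - cepstra.mean(axis=0, keepdims=True)
```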

유색잡음에 대한 적응잡음제거기의 성능향성 (Performance Improvement of an Adaptive Noise Canceller with Colored Noise)

  • 박장식;조성환;손경식
    • 한국통신학회논문지 / Vol. 22, No. 10 / pp.2339-2347 / 1997
  • The performance of an adaptive noise canceller using the LMS algorithm is degraded by gradient noise caused by the target speech signal. An adaptive noise canceller with a speech detector was previously proposed to reduce this performance degradation; the speech detector used an adaptive prediction-error filter adapted by the NLMS algorithm. This paper discusses enhancing the performance of the adaptive noise canceller for colored noise. The affine projection algorithm, which is known to converge faster than the NLMS algorithm for correlated signals, is used to adapt both the adaptive filter and the adaptive prediction-error filter. When speech is detected by the speech detector, the coefficients of the adaptive filter are adapted by a sign-error affine projection algorithm, modified to reduce the misalignment of the adaptive filter coefficients; otherwise, they are adapted by the affine projection algorithm. To obtain better performance, the proper step size of the sign-error affine projection algorithm is also discussed. Computer simulation results show that the performance of the proposed ANC is better than that of the conventional one.

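For reference, a generic affine projection update is sketched below, with an optional sign-error variant of the kind mentioned above; the speech detector, the prediction-error filter, and the paper's specific modification and step-size choice are not reproduced, and all names are illustrative.

```python
import numpy as np

def apa_update(w, X, d, mu=0.5, delta=1e-3, sign_error=False):
    """One affine-projection update of an adaptive filter.

    w : (L,)   current filter coefficients
    X : (L, P) matrix whose columns are the last P input vectors
    d : (P,)   desired (reference) samples paired with those columns
    sign_error=True applies only the sign of the error, one common way to
    limit coefficient misadjustment while target speech is present.
    """
    e = d - X.T @ w                                # a-priori error vector
    if sign_error:
        e = np.sign(e)
    gain = X @ np.linalg.solve(X.T @ X + delta * np.eye(X.shape[1]), e)
    return w + mu * gain, e
```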

VQ/HMM에 의한 화자독립 음성인식에서 다수 후보자를 인식 대상으로 제출하는 방법에 관한 연구 (A Study on the Submission of Multiple Candidates for Decision in Speaker-Independent Speech Recognition by VQ/HMM)

  • 이창영;남호수
    • 음성과학 / Vol. 12, No. 3 / pp.115-124 / 2005
  • We investigated the submission of multiple candidates in speaker-independent speech recognition by VQ/HMM. Submission of a fixed number of candidates was examined first. As the number of candidates was increased to two, three, and four, the recognition error rate decreased by 41%, 58%, and 65%, respectively, compared with that of a single candidate. We then tried another approach in which all candidates within a certain range of Viterbi scores are submitted; the number of candidates increased geometrically as the admitted range was widened. For practical application, a combination of the two methods was also studied: we chose the candidates within a given range of Viterbi scores and limited the maximum number of submitted candidates to five. Experimental results showed that recognition error rates of less than 10% could be achieved with an average of 3.2 candidates using this method.

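The combined selection rule described above (score-range gating plus a cap on the number of candidates) can be sketched as follows, assuming higher Viterbi scores are better; the data layout is illustrative.

```python
def select_candidates(scored, margin, max_candidates=5):
    """Keep every candidate whose Viterbi score is within `margin` of the best,
    but never return more than `max_candidates` of them.

    scored : list of (word, viterbi_score) pairs, higher score is better
    """
    ranked = sorted(scored, key=lambda ws: ws[1], reverse=True)
    best = ranked[0][1]
    within = [word for word, score in ranked if best - score <= margin]
    return within[:max_candidates]
```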

Semantic-Oriented Error Correction for Voice-Activated Information Retrieval System

  • Yoon, Yong-Wook;Kim, Byeong-Chang;Lee, Gary-Geunbae
    • 대한음성학회지:말소리 / No. 44 / pp.115-130 / 2002
  • Voice input is often required in many new application environments, but the low accuracy of speech recognition makes it difficult to extend its applications. Previous approaches raised recognition accuracy by post-processing the recognition results, and these were all lexically oriented. We suggest a new semantic-oriented approach to speech recognition error correction. Through experiments using a speech-driven in-vehicle telematics information application, we show the excellent performance of our approach and the advantages it has, as a semantic-oriented approach, over a purely lexical-oriented approach.


음성 인식용 데이터베이스 검증시스템을 위한 새로운 음성 인식 성능 지표 (A New Speech Quality Measure for Speech Database Verification System)

  • 지승은;김우일
    • 한국정보통신학회논문지 / Vol. 20, No. 3 / pp.464-470 / 2016
  • This paper introduces the development of a speech database verification system for speech recognition based on speech quality measures, and describes the quality-measure extraction algorithm that is the core technology of the system. In a previous study, to produce an effective speech recognition performance measure for this system, several speech quality measures that correlate highly with the Word Error Rate (WER), a representative measure of recognition performance, were combined into a new performance measure. The combined measure showed a higher correlation with the WER in various noise environments than any individual measure used alone, demonstrating that it is effective for predicting speech recognition performance. In the experiments of this paper, the acoustic model likelihood extracted from the secondary speech recognizer used in the previous study's combination is replaced with a GMM (Gaussian Mixture Model) acoustic model likelihood, thereby reducing the system's dependence on another speech recognizer.
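
The GMM acoustic-model likelihood used as the replacement indicator can be computed, for example, with scikit-learn's GaussianMixture; the sketch below fits a GMM on pooled acoustic features and returns an utterance's average per-frame log-likelihood as one ingredient of a combined quality measure. The model size, covariance type, and feature choice are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_gmm(train_features: np.ndarray, n_components: int = 64) -> GaussianMixture:
    """train_features: (n_frames, n_dims) acoustic features pooled over the corpus."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag")
    gmm.fit(train_features)
    return gmm

def utterance_log_likelihood(gmm: GaussianMixture, features: np.ndarray) -> float:
    """Average per-frame log-likelihood of one utterance under the GMM,
    usable as one component of a combined speech quality measure."""
    return float(gmm.score_samples(features).mean())
```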

일반 및 말소리장애 아동의 탈비음화 오류패턴 (Denasalization error pattern for typically developing and SSD children)

  • 김민정
    • 말소리와 음성과학 / Vol. 7, No. 2 / pp.3-8 / 2015
  • Denasalization, in which nasals are replaced by stops, is an unusual error pattern related to manner of articulation. The purpose of this study is to investigate the prevalence of denasalization and to scrutinize nasal production according to phonological context in typically developing children and children with speech sound disorders (SSD). 220 typically developing children and 48 SSD children aged 2 to 6 years were tested with a formal word test, and those who demonstrated denasalization were selected. In addition, the nasal production of the SSD children with denasalization was analyzed for correctness and error types using the formal word test and spontaneous conversation. The results were as follows: (1) Denasalization was shown in fewer than 10% of the typically developing children aged 2-3 years and in more than 20% of the SSD children aged 2-5 years. (2) The SSD children who demonstrated denasalization were categorized into four types according to the error context of nasals: nasal errors in all word positions, nasal errors in word-final and word-medial positions, nasal errors in word-medial position preceding vowels, and nasal errors in word-medial position preceding obstruents. These results indicate that denasalization is a clinically important error pattern and that word-medial position preceding obstruents is an essential context for denasalization in terms of Korean phonotactics.

기능적 조음장애아동과 일반아동의 어중자음 연쇄조건에서 나타나는 어중종성 오류 특성 비교 (Comparison of error characteristics of final consonant at word-medial position between children with functional articulation disorder and normal children)

  • 이란;이은주
    • 말소리와 음성과학 / Vol. 7, No. 2 / pp.19-28 / 2015
  • This study investigated the characteristics of final consonant errors at word-medial position in children with functional articulation disorder. Data were collected from 11 children with functional articulation disorder and 11 normal children, ages 4 to 5. The speech samples were collected with a naming test using seventy-five words covering every possible bi-consonant sequence at word-medial position. The results were as follows. First, the percentage of correct word-medial final consonants was lower in the children with functional articulation disorder than in the normal children. Second, there were significant differences between the two groups in omission, substitution, and assimilation errors. The children with functional articulation disorder showed a high frequency of omission and regressive assimilation errors, with alveolarization being the most frequent regressive assimilation error, whereas the normal children showed a high frequency of regressive assimilation errors, with bilabialization being the most frequent. Finally, in the error analysis according to the manner, place, and phonation type of the word-medial initial consonant, both groups showed a high error rate in the stop-stop condition. The error rate of the word-medial final consonant was high when the word-medial initial consonant was an alveolar or alveopalatal sound, and more errors occurred when the initial sound was a fortis or aspirated sound than when it was a lenis sound. These results provide practical information on the error characteristics of final consonants at word-medial position in children with speech sound disorders.