• Title/Abstract/Keyword: speech error

583 search results

Training Method and Speaker Verification Measures for Recurrent Neural Network based Speaker Verification System

  • 김태형
    • 한국통신학회논문지
    • /
    • 제34권3C호
    • /
    • pp.257-267
    • /
    • 2009
  • This paper presents a training method for neural networks and the employment of MSE (mean square error) values as the basis of a decision regarding the identity claim of a speaker in a recurrent neural network based speaker verification system. Recurrent neural networks (RNNs) are employed to capture temporally dynamic characteristics of the speech signal. In the process of supervised learning for RNNs, target outputs are automatically generated, and the generated target outputs are made to represent the temporal variation of the input speech sounds. To increase the capability of discriminating between the true speaker and an impostor, a discriminative training method for RNNs is presented. This paper shows the use and the effectiveness of the MSE value, which is obtained from the Euclidean distance between the target outputs and the outputs of the networks for the test speech sounds of a speaker, as the basis of speaker verification. In terms of equal error rates, results of experiments performed using the Korean speech database show that the proposed speaker verification system exhibits better performance than a conventional hidden Markov model based speaker verification system.
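
The decision step described above reduces to comparing an MSE score, the mean squared Euclidean distance between the network outputs and the target outputs, against a threshold. A minimal NumPy sketch of that final step follows; the RNN itself, the automatic target generation, and the threshold value are placeholders, not taken from the paper.

```python
import numpy as np

def verification_score(rnn_outputs: np.ndarray, target_outputs: np.ndarray) -> float:
    """MSE between frame-level network outputs and target outputs.

    Both arrays are (num_frames, output_dim); a lower score means the test
    speech is closer to the claimed speaker's targets.
    """
    diff = rnn_outputs - target_outputs
    return float(np.mean(np.sum(diff ** 2, axis=1)))  # mean squared Euclidean distance

def accept_claim(rnn_outputs, target_outputs, threshold=0.5):
    # The threshold is a hypothetical value; in practice it would be tuned
    # on development data to the desired operating point (e.g., equal error rate).
    return verification_score(rnn_outputs, target_outputs) < threshold
```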

연결숫자음 전화음성 인식에서의 오인식 유형 분석 (Analysis of Error Patterns in Korean Connected Digit Telephone Speech Recognition)

  • 김민성;정성윤;손종목;배건성;김상훈
    • 대한음성학회:학술대회논문집
    • /
    • 대한음성학회 2003년도 5월 학술대회지
    • /
    • pp.115-118
    • /
    • 2003
  • Channel distortion and coarticulation effects make connected digit telephone speech difficult to recognize and degrade recognition performance in the telephone environment. In this paper, as basic research toward improving the recognition performance for Korean connected digit telephone speech, error patterns are investigated and analyzed. The telephone digit speech database released by SITEC is used for recognition experiments with the HTK system. DWFBA and MRTCN are used for feature extraction and channel compensation, respectively. Experimental results and our findings are discussed.

고차통계 정규화를 이용한 강인한 음성인식 (Robust Speech Recognition Using Real-Time Higher Order Statistics Normalization)

  • 정주현;송화전;김형순
    • 대한음성학회지:말소리
    • /
    • 제54호
    • /
    • pp.63-72
    • /
    • 2005
  • The performance of a speech recognition system is degraded by the mismatch between training and test environments. Many studies have been presented to compensate for noise components in the cepstral domain. Recently, a higher-order cepstral moment normalization method has been introduced to improve recognition accuracy. In this paper, we present a real-time higher-order moment normalization method with a post-processing smoothing filter to reduce the parameter estimation error in the higher-order moment computation. In experiments on the Aurora2 database, we obtained an error rate reduction of 44.7% with the proposed algorithm in comparison with the baseline system.
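
For orientation only, the sketch below shows an utterance-level (batch) form of moment-based normalization: the cepstral mean is removed and each dimension is rescaled by a higher-order absolute moment. The paper's actual contribution, real-time (recursive) estimation with a post-processing smoothing filter, is not reproduced here.

```python
import numpy as np

def higher_order_moment_normalize(cepstra: np.ndarray, order: int = 3) -> np.ndarray:
    """Batch normalization of cepstra with shape (num_frames, num_ceps).

    The mean of each dimension is removed and the dimension is rescaled so
    that its order-th absolute central moment becomes one.
    """
    centered = cepstra - cepstra.mean(axis=0, keepdims=True)
    moment = np.mean(np.abs(centered) ** order, axis=0)   # per-dimension moment
    scale = np.power(moment + 1e-12, 1.0 / order)         # guard against division by zero
    return centered / scale
```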

FIR 필터링과 스펙트럼 기울이기가 MFCC를 사용하는 음성인식에 미치는 효과 (The Effect of FIR Filtering and Spectral Tilt on Speech Recognition with MFCC)

  • 이창영
    • 한국전자통신학회논문지
    • /
    • 제5권4호
    • /
    • pp.363-371
    • /
    • 2010
  • As part of an effort to reduce the error rate of speaker-independent speech recognition by improving the classification of feature vectors, we study the effect of tilting the Fourier spectrum during MFCC extraction. The effect of applying FIR filtering to the speech signal is also investigated. The proposed method is evaluated by two independent means: Fisher's discriminant function, and the speech recognition error rate measured with hidden Markov models and fuzzy vector quantization. Experimental results confirm that, with an appropriate choice of parameters, a recognition error rate about 10% lower than that of the conventional method is obtained.
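
A minimal sketch of the two preprocessing ideas named in the abstract, under assumed forms of the filter and the tilt; the FIR coefficient and the tilt exponent alpha are illustrative, not the paper's values, and the usual mel filterbank and DCT steps of MFCC extraction would follow unchanged.

```python
import numpy as np

def fir_preemphasis(signal: np.ndarray, coeff: float = 0.97) -> np.ndarray:
    """First-order FIR high-pass filtering: y[n] = x[n] - coeff * x[n-1]."""
    return np.append(signal[0], signal[1:] - coeff * signal[:-1])

def tilted_magnitude_spectrum(frame: np.ndarray, alpha: float = 0.5,
                              n_fft: int = 512) -> np.ndarray:
    """Magnitude spectrum of a windowed frame with a frequency-dependent tilt."""
    mag = np.abs(np.fft.rfft(frame * np.hamming(len(frame)), n_fft))
    tilt = (np.arange(1, len(mag) + 1) / len(mag)) ** alpha   # normalized frequency 0..1
    return mag * tilt                                         # boosts high frequencies when alpha > 0
```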

잡음에 강인한 음성인식을 위한 스펙트럼 보상 방법 (A Spectral Compensation Method for Noise Robust Speech Recognition)

  • 조정호
    • 전자공학회논문지 IE
    • /
    • 제49권2호
    • /
    • pp.9-17
    • /
    • 2012
  • One practical problem in applications of speech recognition systems is the degradation of recognition performance caused by distortion of the speech signal, the most important cause of which is additive noise. This paper describes a spectral compensation method based on a spectral peak enhancement technique and an effective noise subtraction technique for noise-robust speech recognition. The proposed method enhances the formant structure of the speech spectrum and compensates the spectral tilt while leaving the wide-bandwidth spectral components unchanged. Recognition experiments on speech corrupted by white Gaussian noise, car noise, speech noise, or subway noise show that, compared with the case without spectral compensation, the new method slightly reduces the average error rate in high-SNR (signal-to-noise ratio) environments and cuts the average error rate in half in a low-SNR (10 dB) environment.
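
As background, the sketch below shows generic versions of the two ingredients named in the abstract, noise subtraction and spectral peak (formant) emphasis, applied to one frame's power spectrum; the paper's specific compensation rules are not reproduced here.

```python
import numpy as np

def subtract_noise(power_spec: np.ndarray, noise_psd: np.ndarray,
                   over_sub: float = 2.0, floor: float = 0.01) -> np.ndarray:
    """Power spectral subtraction with an over-subtraction factor and a spectral floor."""
    cleaned = power_spec - over_sub * noise_psd
    return np.maximum(cleaned, floor * power_spec)

def emphasize_peaks(power_spec: np.ndarray, gamma: float = 1.2) -> np.ndarray:
    """Crude formant emphasis: raising the spectrum to a power > 1 widens the
    gap between peaks and valleys; total frame energy is then restored."""
    sharpened = power_spec ** gamma
    return sharpened * power_spec.sum() / (sharpened.sum() + 1e-12)
```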

음성장애의 병인 집단 간 추정 발화 기본주파수 절대 오차 비교 (A comparison of the absolute error of estimated speaking fundamental frequency (AEF0) among etiological groups of voice disorders)

  • 이승진;임재열;김재옥
    • 말소리와 음성과학
    • /
    • 제15권4호
    • /
    • pp.53-60
    • /
    • 2023
  • This study compared the absolute error of estimated speaking fundamental frequency (AEF0), obtained with the voice range profile (VRP) and the speech range profile (SRP), across etiological groups of voice disorders, and examined, for each etiological group, the correlations between AEF0 and related variables. The participants were 120 in total: 30 each (15 men, 15 women) in the functional (FUNC), organic (ORGAN), and neurogenic (NEUR) voice disorder groups and in a normal control group (NC). Each participant performed the voice range profile and speech range profile tasks, and the speaking fundamental frequency was measured by electroglottography (EGG). In the comparison of AEF0 across etiological groups, Grade and Severity did not differ between groups, whereas AEF0VRP and AEF0SUM did: AEF0VRP was higher in ORGAN than in FUNC and NC, and AEF0SUM was higher in ORGAN than in NC. In addition, AEF0 was positively correlated with Grade in FUNC and NEUR, whereas in ORGAN it was positively correlated with the closed quotient (CQ). Therefore, care should be taken when applying AEF0 and examining related voice variables according to the etiological group, and this study is considered to provide basic data for such clinical judgments.

A Comparison of Korean EFL Learners' Oral and Written Productions

  • Lee, Eun-Ha
    • 영어어문교육
    • /
    • 제12권2호
    • /
    • pp.61-85
    • /
    • 2006
  • The purpose of the present study is to compare Korean EFL learners' speech corpus (i.e., oral productions) with their composition corpus (i.e., written productions). Four college students participated in the study. The composition corpus was collected through a writing assignment, and the speech corpus was gathered by audio-taping their oral presentations. The results of the data analysis indicate that (i) as for error frequency, young adult low-intermediate Korean EFL learners showed high frequency in determiners (mostly indefinite articles), vocabulary (mostly semantic errors), and prepositions, and the frequency order did not differ much between the speech corpus and the composition corpus; and (ii) when comparing the oral productions with the written productions, there were not many differences between them in terms of content, style (i.e., colloquial vs. literary), vocabulary selection, and error types and frequency. Therefore, it is assumed that the proficiency in oral presentation of EFL learners at this learning stage depends heavily on how much and how well they are able to write. In other words, EFL learners' writing and speaking skills are closely correlated. This implies that the teacher does not need to separate teaching how to speak from teaching how to write; the teacher may use the same methods or strategies to help learners improve their English speaking and writing skills. Furthermore, it will be more effective to teach writing before speaking, since learners have more opportunities to write than to speak in EFL contexts.

SMV코덱의 음성/음악 분류 성능 향상을 위한 최적화된 가중치를 적용한 입력벡터 기반의 SVM 구현 (Analysis and Implementation of Speech/Music Classification for 3GPP2 SMV Codec Employing SVM Based on Discriminative Weight Training)

  • 김상균;장준혁;조기호;김남수
    • 한국음향학회지
    • /
    • 제28권5호
    • /
    • pp.471-476
    • /
    • 2009
  • This paper proposes a method for improving the speech/music classification performance of the conventional 3GPP2 selectable mode vocoder (SMV) codec by constructing input vectors with optimized weights based on discriminative weight training and applying a support vector machine (SVM). Specifically, the minimum classification error (MCE) method is introduced to assign an optimized weight to each feature, and the SVM applied to these weighted features is compared with a conventional SVM-based algorithm that does not consider the weights, showing superior speech/music classification performance.
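
The classification step described above amounts to training an SVM on feature vectors scaled by per-feature weights. A minimal scikit-learn sketch follows, where the weight vector stands in for the MCE-trained values; the MCE training loop itself is not shown.

```python
import numpy as np
from sklearn.svm import SVC

def train_weighted_svm(features: np.ndarray, labels: np.ndarray,
                       weights: np.ndarray) -> SVC:
    """Train a speech/music classifier on weighted SMV-style features.

    features: (num_frames, num_features), labels: 1 for speech, 0 for music,
    weights: one weight per feature, assumed to come from MCE-based
    discriminative weight training.
    """
    clf = SVC(kernel="rbf")
    clf.fit(features * weights, labels)  # per-feature scaling before the SVM
    return clf
```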

네트워크 환경에서 서버용 음성 인식을 위한 MFCC 기반 음성 부호화기 설계 (A MFCC-based CELP Speech Coder for Server-based Speech Recognition in Network Environments)

  • 이길호;윤재삼;오유리;김홍국
    • 대한음성학회지:말소리
    • /
    • 제54호
    • /
    • pp.27-43
    • /
    • 2005
  • Existing standard speech coders can provide speech communication of high quality, but they degrade the performance of speech recognition systems that use the speech reconstructed by the coders. The main cause of the degradation is that the spectral envelope parameters in speech coding are optimized for speech quality rather than for speech recognition performance. For example, mel-frequency cepstral coefficients (MFCCs) are generally known to provide better speech recognition performance than linear prediction coefficients (LPCs), a typical parameter set in speech coding. In this paper, we propose a speech coder using MFCC instead of LPC to improve the performance of a server-based speech recognition system in network environments. However, the main challenge in using MFCC is developing efficient MFCC quantization at a low bit rate. First, we explore the interframe correlation of MFCCs, which leads to predictive quantization of MFCC. Second, a safety-net scheme is proposed to make the MFCC-based speech coder robust to channel errors. As a result, we propose an 8.7 kbps MFCC-based CELP coder. A PESQ test shows that the proposed speech coder has speech quality comparable to 8 kbps G.729, while the performance of speech recognition using the proposed coder is better than that using G.729.
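
A toy sketch of the per-frame choice between predictive quantization and a direct safety-net mode, as described above; the scalar quantizer, the prediction coefficient, and the mode-selection rule are illustrative stand-ins for the trained codebooks and bit allocation of the actual coder.

```python
import numpy as np

def quantize(vec: np.ndarray, step: float = 0.05) -> np.ndarray:
    """Stand-in uniform scalar quantizer; the coder would use trained codebooks."""
    return np.round(vec / step) * step

def encode_mfcc_frame(curr: np.ndarray, prev_decoded: np.ndarray,
                      pred_coeff: float = 0.7, step: float = 0.05):
    """Return the locally decoded frame and whether the safety-net mode was chosen.

    Predictive mode quantizes the residual against a first-order prediction from
    the previously decoded frame; the safety-net mode quantizes the frame
    directly, which limits error propagation after channel errors.
    """
    pred = pred_coeff * prev_decoded
    predictive = pred + quantize(curr - pred, step)
    direct = quantize(curr, step)
    use_safety_net = np.sum((direct - curr) ** 2) < np.sum((predictive - curr) ** 2)
    decoded = direct if use_safety_net else predictive
    return decoded, use_safety_net
```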

Annotation of a Non-native English Speech Database by Korean Speakers

  • Kim, Jong-Mi
    • 음성과학
    • /
    • 제9권1호
    • /
    • pp.111-135
    • /
    • 2002
  • An annotation model of a non-native speech database has been devised, wherein English is the target language and Korean is the native language. The proposed annotation model features overt transcription of predictable linguistic information in native speech by the dictionary entry and several predefined types of error specification found in native language transfer. The proposed model is, in that sense, different from other previously explored annotation models in the literature, most of which are based on native speech. The validity of the newly proposed model is revealed in its consistent annotation of 1) salient linguistic features of English, 2) contrastive linguistic features of English and Korean, 3) actual errors reported in the literature, and 4) the newly collected data in this study. The annotation method in this model adopts the widely accepted conventions, the Speech Assessment Methods Phonetic Alphabet (SAMPA) and the Tones and Break Indices (ToBI). In the proposed annotation model, SAMPA is employed exclusively for segmental transcription and ToBI for prosodic transcription. The annotation of non-native speech is used to assess the speaking ability of English as a Foreign Language (EFL) learners.
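
To make the annotation layers concrete, here is a small illustrative record combining a SAMPA segmental tier, a ToBI prosodic tier, and transfer-error tags; all field names and labels are assumptions, not the database's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class NonNativeAnnotation:
    """One annotated utterance (illustrative only)."""
    orthography: str                  # target English text
    sampa_segments: List[str]         # segmental transcription in SAMPA
    tobi_labels: List[str]            # prosodic labels in ToBI
    transfer_errors: List[str] = field(default_factory=list)  # e.g. "vowel_epenthesis"

example = NonNativeAnnotation(
    orthography="speech",
    sampa_segments=["s", "p", "i:", "tS", "i"],   # final vowel epenthesis by a Korean learner
    tobi_labels=["H*", "L-L%"],
    transfer_errors=["vowel_epenthesis"],
)
```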
