• Title/Summary/Keyword: robust speech recognition


Feature Extraction through the post processing of WFBA based on MMSE-STSA for Robust Speech Recognition (강인한 음성인식을 위한 MMSE-STSA기반 후처리 가중필터뱅크분석을 통한 특징추출)

  • Jung Sungyun;Bae Keunsung
    • Proceedings of the Acoustical Society of Korea Conference / autumn / pp.39-42 / 2004
  • This paper presents a feature extraction method for speech recognition that is robust to noisy speech. The proposed method consists of a two-stage noise reduction process: the first stage enhances the noisy speech signal with the MMSE-STSA speech enhancement technique, and the second stage reduces the influence of residual noise by applying post-processing weighted filter bank analysis (WFBA) to the MMSE-STSA-enhanced speech. To evaluate the proposed method, recognition experiments are carried out on test set A of the AURORA2 noisy speech database, and the results are compared with those of existing methods. A brief illustrative sketch of the two-stage idea follows this entry.

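The sketch below shows, under stated assumptions, how such a two-stage front end could look in Python: an Ephraim-Malah MMSE-STSA gain with a decision-directed a priori SNR estimate, followed by a plain log mel filter bank standing in for the paper's weighted filter bank post-processing. The function names, parameters, and simplifications are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.special import i0e, i1e  # exponentially scaled modified Bessel functions

def mmse_stsa_gain(noisy_power, noise_power, prev_clean_power, alpha=0.98):
    """Per-bin MMSE-STSA (Ephraim-Malah) gain with a decision-directed a priori SNR."""
    gamma = np.maximum(noisy_power / (noise_power + 1e-12), 1e-6)        # a posteriori SNR
    xi = alpha * prev_clean_power / (noise_power + 1e-12) \
         + (1.0 - alpha) * np.maximum(gamma - 1.0, 0.0)                  # a priori SNR
    v = xi * gamma / (1.0 + xi)
    # exp(-v/2) * I0(v/2) == i0e(v/2), which keeps the evaluation numerically stable
    gain = (np.sqrt(np.pi) / 2.0) * (np.sqrt(v) / gamma) \
           * ((1.0 + v) * i0e(v / 2.0) + v * i1e(v / 2.0))
    return np.minimum(gain, 1.0)

def log_mel_features(enhanced_power, mel_fb):
    """Stage-two stand-in: plain log mel filter bank analysis of the enhanced spectrum
    (the paper's residual-noise weighting is not reproduced here)."""
    return np.log(mel_fb @ enhanced_power + 1e-12)
```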

Distance Measures Based Upon Adaptive Filtering For Robust Speech Recognition In Noise (잡음 환경하에서 음성 인식을 위한 적응필터링 거리 척도에 관한 연구)

  • 정원국;은종관
    • The Journal of the Acoustical Society of Korea / v.11 no.1E / pp.15-22 / 1992
  • Speech recognition performance degrades significantly in noisy environments. This paper proposes a distance measure that is robust to the effect of such noise. We assume that the feature vector of the noise-corrupted speech signal is the feature vector of the clean speech signal transformed by an FIR system, where the FIR system models the effect of the noise. The unknown FIR coefficients are estimated with the RLS adaptive algorithm, and the proposed distance measure is expressed in terms of the prediction error of the adaptive filter. Among the adaptive filter structures examined, the single-channel first-order FIR structure gives the best recognition performance and yields an efficient distance measure algorithm. Speaker-independent isolated word recognition experiments using the DTW algorithm at various signal-to-noise ratios show that the proposed distance measure performs well at almost all SNRs. A small sketch of such a prediction-error distance follows this entry.

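A small sketch of a prediction-error distance of this kind, assuming a single-tap FIR relation between the clean and noisy feature frames. The paper estimates the coefficients recursively with RLS; here the single tap is solved in closed form per frame pair, which is an illustrative simplification.

```python
import numpy as np

def fir_prediction_error_distance(ref_frame, test_frame, eps=1e-12):
    """Distance between a clean reference feature frame and a noisy test frame,
    measured as the residual energy after fitting test ~ a * ref with one FIR tap."""
    a = np.dot(ref_frame, test_frame) / (np.dot(ref_frame, ref_frame) + eps)
    residual = test_frame - a * ref_frame
    return float(np.dot(residual, residual))
```

In a DTW-based recognizer this residual would take the place of the usual Euclidean local distance between frames.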

Robust Speaker Identification Exploiting the Advantages of PCA and LDA (주성분분석과 선형판별분석의 장점을 이용한 강인한 화자식별)

  • Kim, Min-Seok;Yu, Ha-Jin;Kim, Sung-Joo
    • Proceedings of the KSPS conference / 2007.05a / pp.319-322 / 2007
  • The goal of our research is to build a text-independent speaker identification system that can be used on mobile devices without any additional adaptation process. In this paper, we show that exploiting the advantages of both PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis) can increase performance in this setting. The proposed method reduced the relative recognition error by 13.5%. A hedged sketch of one common PCA-plus-LDA combination follows this entry.

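The abstract does not spell out exactly how the two projections are combined, so the sketch below shows one common arrangement as an assumption: PCA for decorrelation and dimension reduction, followed by LDA for discriminative projection and scoring. Feature shapes and parameter values are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def train_pca_lda(features, speaker_labels, n_pca=40):
    """features: (n_frames, dim) training frames; speaker_labels: (n_frames,) speaker ids."""
    pca = PCA(n_components=n_pca).fit(features)            # decorrelate / reduce dimension
    lda = LinearDiscriminantAnalysis().fit(pca.transform(features), speaker_labels)
    return pca, lda

def identify_speaker(pca, lda, test_features):
    """Average the per-frame log posteriors and return the best-scoring speaker."""
    log_post = lda.predict_log_proba(pca.transform(test_features))
    return lda.classes_[int(np.argmax(log_post.mean(axis=0)))]
```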

Robust estimation of HMM parameters Based on the State-Dependent Source-Quantization for Speech Recognition (상태의존 소스 양자화에 기반한 음성인식을 위한 은닉 마르코프 모델 파라미터의 견고한 추정)

  • 최환진;박재득
    • The Journal of the Acoustical Society of Korea / v.17 no.1 / pp.66-75 / 1998
  • Hidden Markov models are now a standard approach to speech recognition, and their performance depends on how well the acoustic modeling represents the characteristics of speech. This paper proposes a state-dependent source-quantization modeling method that represents the output probability of a state as an output distribution weighted by the distributions of the sources and their frequencies, in order to estimate the state output probabilities robustly. The method is based on the observation that feature parameters within a state share similar characteristics and vary less than feature parameters in other states. Experimental results show that the proposed method improves word accuracy by 2.7% and sentence accuracy by 3.6% over the baseline system. These results indicate that the proposed SDSQ-DHMM is effective in improving recognition accuracy and can serve as an alternative for robust estimation of per-state output probabilities in HMMs. A loose sketch of a frequency-weighted state output distribution follows this entry.

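The sketch below is only an assumption about the general shape of the idea, not the SDSQ-DHMM estimation procedure itself: a state's output probability is formed as a mixture over that state's quantized sources, weighted by how often each source occurs in the state.

```python
import numpy as np
from scipy.stats import multivariate_normal

def state_output_prob(obs, source_means, source_vars, source_counts):
    """obs: (dim,) feature frame; the remaining arguments describe one state's sources:
    per-source mean vectors, diagonal variances, and occurrence counts in that state."""
    weights = source_counts / source_counts.sum()          # frequency-based weights
    densities = np.array([
        multivariate_normal.pdf(obs, mean=m, cov=np.diag(v))
        for m, v in zip(source_means, source_vars)
    ])
    return float(np.dot(weights, densities))
```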

On-line HMM adaptation using fast covariance compensation for robust speech recognition (빠른 공분산 보상을 이용한 온라인 HMM 적응)

  • 정규준;조훈영;오영환
    • Proceedings of the Korean Information Science Society Conference / 2001.10b / pp.34-36 / 2001
  • This paper discusses how to apply PMC (parallel model combination), a model-based noise compensation method, online. PMC requires a precomputed noise model and a large amount of computation for parameter compensation, which makes it difficult to compensate model parameters online. To address this problem, we review a previously proposed online model compensation method and further improve recognition performance by performing PMC's covariance compensation, which that method omitted because of compensation time, with relatively little computation. In experiments on an isolated digit recognition task with white noise added at SNRs of 0, 5, and 10 dB, the proposed method requires less model adaptation time than full PMC while improving the recognition rate by 10% on average over the existing online model compensation method. A hedged sketch of PMC-style mean and covariance compensation follows this entry.

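Below is a hedged sketch of PMC-style compensation in the log-spectral domain: the widely used log-add approximation for the means plus a crude first-order approximation for the variances. It is not the paper's specific fast covariance compensation, only a stand-in to make the idea concrete.

```python
import numpy as np

def pmc_log_add(clean_mean, clean_var, noise_mean, noise_var, g=1.0):
    """Combine clean-speech and noise Gaussians (log-spectral domain, diagonal covariance)."""
    s = np.exp(clean_mean)                     # rough linear-domain speech magnitude
    n = g * np.exp(noise_mean)                 # noise magnitude scaled by a gain term
    combined_mean = np.log(s + n)              # log-add approximation for the mean
    alpha = s / (s + n)                        # how much speech dominates each bin
    combined_var = alpha**2 * clean_var + (1.0 - alpha)**2 * noise_var
    return combined_mean, combined_var
```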

Performance Comparison of Out-Of-Vocabulary Word Rejection Algorithms in Variable Vocabulary Word Recognition (가변어휘 단어 인식에서의 미등록어 거절 알고리즘 성능 비교)

  • 김기태;문광식;김회린;이영직;정재호
    • The Journal of the Acoustical Society of Korea / v.20 no.2 / pp.27-34 / 2001
  • Utterance verification is used in variable vocabulary word recognition to reject words that are out of vocabulary or incorrectly recognized, and it is an important technology for designing a user-friendly speech recognition system. We propose a new utterance verification algorithm, requiring no separate training, based on the minimum verification error. First, using the PBW (Phonetically Balanced Words) DB (445 words), we create training-free anti-phoneme models that include many PLUs (Phoneme Like Units), so that the anti-phoneme models have the minimum verification error. Then, for OOV (Out-Of-Vocabulary) rejection, the phoneme-based confidence measure, which uses the likelihood ratio between the phoneme model (null hypothesis) and the anti-phoneme model (alternative hypothesis), is normalized by the null hypothesis, which makes it more robust for OOV rejection. The word-based confidence measure built from the phoneme-based confidence measure provides improved detection of near-misses in speech recognition as well as better discrimination between in-vocabulary words and OOVs. Using the proposed anti-model and confidence measure, we achieve significant performance improvement: CA (correct acceptance of in-vocabulary words) is about 89% and CR (correct rejection of OOVs) is about 90%, an improvement of about 15-21% in ERR (Error Reduction Rate). A small sketch of such a normalized confidence measure follows this entry.

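A small sketch of a confidence measure of the kind described above: a phoneme-level log-likelihood ratio against the anti-phoneme model, normalized by the null-hypothesis score, then averaged to a word-level score and thresholded. The segmentation, model scores, and threshold are assumptions for illustration.

```python
import numpy as np

def phoneme_confidence(log_lik_phone, log_lik_anti):
    """Log-likelihood ratio between the phoneme model (null hypothesis) and the
    anti-phoneme model, normalized by the null-hypothesis score."""
    return (log_lik_phone - log_lik_anti) / (abs(log_lik_phone) + 1e-12)

def word_confidence(phone_confidences):
    """Word-level score as the mean of its phoneme-level confidences."""
    return float(np.mean(phone_confidences))

def accept_as_in_vocabulary(phone_confidences, threshold=0.0):
    """Accept the recognized word only if its confidence clears the threshold."""
    return word_confidence(phone_confidences) >= threshold
```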

A Study on Spoken Digits Analysis and Recognition (숫자음 분석과 인식에 관한 연구)

  • 김득수;황철준
    • Journal of Korea Society of Industrial Information Systems / v.6 no.3 / pp.107-114 / 2001
  • This paper describes connected digit recognition in Korean that takes acoustic features into account. The recognition rate for connected digits is usually lower than for word recognition, so speech feature parameters and acoustic features are employed to build robust digit models, and recognition experiments confirm the effect of considering acoustic features. We used the KLE 4-connected-digit corpus as the database and 19 continuous-density HMMs as PLUs (Phoneme Like Units) derived using phonetic rules. Two cases were tested: in the first, phoneme models were built in the usual way with Mel-Cepstrum and regression coefficients; in the second, expanded feature parameters and acoustic features were used. In both cases, OPDP (One Pass Dynamic Programming) and an FSA (Finite State Automaton) were employed for the recognition tests, and various acoustic features were applied to the FSN. As a result, we obtained a 55.4% recognition rate for Mel-Cepstrum, 67.4% for Mel-Cepstrum with regression coefficients, 74.3% for the expanded feature parameters, and 75.4% when acoustic features were also applied. Since applying acoustic features gave better results than the earlier methods, we conclude that the suggested method is effective for connected digit recognition in Korean. A toy digit-loop grammar of the kind used with an FSN follows this entry.

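The toy grammar below is only meant to illustrate the shape of a finite-state network for connected digits (silence, a digit loop of arbitrary length, silence); the digit symbols and state names are placeholders, not the network used in the paper.

```python
DIGITS = ["공", "일", "이", "삼", "사", "오", "육", "칠", "팔", "구"]

def build_digit_loop_fsn():
    """Return (states, transitions); each transition is a (source, target, label) triple."""
    states = ["sil_start", "digit", "sil_end"]
    transitions = [("sil_start", "digit", d) for d in DIGITS]   # enter the digit loop
    transitions += [("digit", "digit", d) for d in DIGITS]      # arbitrary-length digit string
    transitions += [("digit", "sil_end", "sil")]                # leave the loop into silence
    return states, transitions
```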

Lexico-semantic interactions during the visual and spoken recognition of homonymous Korean Eojeols (한국어 시·청각 동음동철이의 어절 재인에 나타나는 어휘-의미 상호작용)

  • Kim, Joonwoo;Kang, Kathleen Gwi-Young;Yoo, Doyoung;Jeon, Inseo;Kim, Hyun Kyung;Nam, Hyeomin;Shin, Jiyoung;Nam, Kichun
    • Phonetics and Speech Sciences / v.13 no.1 / pp.1-15 / 2021
  • The present study investigated the mental representation and processing of an ambiguous word in the bimodal processing system by manipulating the lexical ambiguity of a visually or auditorily presented word. Homonyms (e.g., '물었다') with more than two meanings and control words (e.g., '고통을') with a single meaning were used in the experiments. The lemma frequency of the words was manipulated while the relative frequency of the multiple meanings of each homonym was balanced. In both experiments using the lexical decision task, a robust frequency effect and a critical interaction of word type by frequency were found. In Experiment 1, spoken homonyms yielded faster latencies relative to control words (i.e., an ambiguity advantage) in the low-frequency condition, while an ambiguity disadvantage was found in the high-frequency condition. A similar interactive pattern was found for visually presented homonyms in the subsequent Experiment 2. Taken together, the first key finding is that interdependent lexico-semantic processing can be found in both the visual and the auditory processing systems, which in turn suggests that semantic processing is not modality dependent but rather takes place on the basis of general lexical knowledge. The second is that multiple semantic candidates provide facilitative feedback only when the lemma frequency of the word is relatively low.

RPCA-GMM for Speaker Identification (화자식별을 위한 강인한 주성분 분석 가우시안 혼합 모델)

  • 이윤정;서창우;강상기;이기용
    • The Journal of the Acoustical Society of Korea / v.22 no.7 / pp.519-527 / 2003
  • Speech is strongly influenced by outliers introduced by unexpected events such as additive background noise, changes in the speaker's utterance pattern, and voice detection errors, and these outliers can severely degrade speaker recognition performance. In this paper, we propose a GMM based on robust principal component analysis (RPCA-GMM) using M-estimation to address both outliers and the high dimensionality of the training feature vectors in speaker identification. First, a new feature vector with reduced dimension is obtained by robust PCA derived from M-estimation; the robust PCA transforms the original feature vector onto the lower-dimensional linear subspace spanned by the leading eigenvectors of the feature covariance matrix. Second, a GMM with diagonal covariance matrices is trained on these transformed feature vectors. Speaker identification experiments demonstrate the effectiveness of the proposed method in comparison with PCA and a conventional diagonal-covariance GMM. For every 2% increase in the proportion of outliers, the proposed method maintains almost the same identification rate with only 0.03% degradation, while the conventional GMM and PCA degrade by 0.65% and 0.55%, respectively. This shows that our method is more robust to the presence of outliers. A hedged sketch of the projection-then-GMM pipeline follows this entry.
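
The sketch below follows the projection-then-GMM pipeline described above, but with an important caveat: the paper's robust PCA uses M-estimation, whereas here a MinCovDet robust covariance stands in for it, which is a different robust estimator and only an illustrative substitute. Dimensions and mixture counts are placeholders.

```python
import numpy as np
from sklearn.covariance import MinCovDet
from sklearn.mixture import GaussianMixture

def train_rpca_like_gmm(features, n_components_pca=20, n_mixtures=16):
    """features: (n_frames, dim) training frames for one speaker."""
    robust_cov = MinCovDet().fit(features).covariance_
    eigvals, eigvecs = np.linalg.eigh(robust_cov)
    projection = eigvecs[:, np.argsort(eigvals)[::-1][:n_components_pca]]  # leading eigenvectors
    projected = features @ projection
    gmm = GaussianMixture(n_components=n_mixtures, covariance_type="diag").fit(projected)
    return projection, gmm

def score_speaker(projection, gmm, test_features):
    """Average log-likelihood of the test frames under this speaker's model."""
    return float(gmm.score(test_features @ projection))
```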

Robust Blind Source Separation to Noisy Environment For Speech Recognition in Car (차량용 음성인식을 위한 주변잡음에 강건한 브라인드 음원분리)

  • Kim, Hyun-Tae;Park, Jang-Sik
    • The Journal of the Korea Contents Association / v.6 no.12 / pp.89-95 / 2006
  • The performance of blind source separation (BSS) using independent component analysis (ICA) degrades significantly in reverberant environments. The post-processing method proposed in this paper is designed to remove the residual components precisely. It uses a modified NLMS (normalized least mean square) filter in the frequency domain to estimate the cross-talk path that causes the residual cross-talk components. The residual cross-talk components in one channel correspond to the direct components in the other channel, so the cross-talk path can be estimated by feeding the other channel's signal to the adaptive filter. In the conventional NLMS filter the step size is normalized by the input signal power, whereas in the modified NLMS filter it is normalized by the sum of the input signal power and the error signal power, which prevents misadjustment of the filter weights. The estimated residual cross-talk components are then removed by non-stationary spectral subtraction. Computer simulations with speech signals show that the proposed method improves the noise reduction ratio (NRR) by approximately 3 dB over conventional FDICA. A minimal sketch of the modified NLMS update follows this entry.

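A minimal sketch of the modified NLMS update described above: the step size is normalized by the sum of the input power and the error power rather than by the input power alone. Filter length, framing, and the spectral-subtraction stage are omitted, and the variable names are assumptions.

```python
import numpy as np

def modified_nlms_step(w, x, d, mu=0.5, eps=1e-8):
    """One frequency-domain NLMS update.
    w: complex filter weights, x: reference slice from the other channel,
    d: desired sample from the current channel."""
    y = np.vdot(w, x)                                        # estimated cross-talk component
    e = d - y                                                # residual after removing the estimate
    norm = np.real(np.vdot(x, x)) + np.abs(e) ** 2 + eps     # input power + error power
    w = w + mu * np.conj(e) * x / norm                       # modified normalization of the step
    return w, e
```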