• Title/Summary/Keyword: Robust Speech Recognition (강인한 음성 인식)

Search Results: 197

Flexible Speaker Adaptation Reflecting the Quality of Adaptation Data (Adaptation Data의 Quality를 고려한 강인한 화자 적응)

  • Pyo Hyun-A;Kim Se-Hyun;Oh Yung-Hwan
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • spring
    • /
    • pp.37-40
    • /
    • 2002
  • Recently, research on speaker adaptation has been actively conducted to improve the performance of speech recognition systems. For speaker adaptation that modifies the model parameters of an HMM-based recognition system, research has centered on the MAP and MLLR methods. The two methods show different performance depending on the amount of adaptation data. In this paper, we define the quality of the adaptation data and propose a speaker adaptation method that uses this quality as a weight between the two existing methods. Applying the proposed method to a 500-word Korean city-name recognition system built by the KAIST communications laboratory improved its performance.


A Study on Voice Recognition Pattern Matching Level for Vehicle ECU Control (자동차 ECU제어를 위한 음성인식 패턴매칭레벨에 관한 연구)

  • Ahn, Jong-Young;Kim, Young-Sub;Kim, Su-Hoon;Hur, Kang-In
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.10 no.1
    • /
    • pp.75-80
    • /
    • 2010
  • Noise handling is very important for voice recognition in the vehicle environment, and both hardware and software approaches have been studied. The hardware approach is noise filter circuit design, basically using a low-pass filter, which has shown good results. On the software side, algorithms such as noise cancellers and neural networks (NN) have been developed. In this paper, we analyze classified parameter pattern matching levels for voice recognition in a car noise environment using DTW (Dynamic Time Warping), a pattern recognition algorithm applicable to time series.
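The DTW alignment the abstract above refers to can be sketched as follows. This is a hypothetical minimal example for 1-D feature sequences, not the paper's implementation:

```python
# Minimal DTW sketch: aligns two 1-D feature sequences and returns
# the cumulative alignment cost (hypothetical toy example).
def dtw_distance(a, b):
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] = best cumulative cost aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])  # local frame distance
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

# A time-stretched copy of a pattern aligns at zero cost:
print(dtw_distance([1, 2, 3, 2, 1], [1, 1, 2, 3, 2, 1]))  # → 0.0
```

In a real recognizer the local distance would be computed between multi-dimensional cepstral frames rather than scalars.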

Speaker Identification Using Higher-Order Statistics In Noisy Environment (고차 통계를 이용한 잡음 환경에서의 화자식별)

  • Shin, Tae-Young;Kim, Gi-Sung;Kwon, Young-Uk;Kim, Hyung-Soon
    • The Journal of the Acoustical Society of Korea
    • /
    • v.16 no.6
    • /
    • pp.25-35
    • /
    • 1997
  • Most speech analysis methods developed to date are based on second-order statistics, and one of the biggest drawbacks of these methods is that they show dramatic performance degradation in noisy environments. In contrast, methods using higher-order statistics (HOS), which have the property of suppressing Gaussian noise, enable robust feature extraction in noisy environments. In this paper we propose a text-independent speaker identification system using higher-order statistics and compare its performance with that of the conventional second-order-statistics-based method in both white and colored noise environments. The proposed speaker identification system is based on the vector quantization approach, and employs a HOS-based voiced/unvoiced detector in order to extract feature parameters for voiced speech only, which has a non-Gaussian distribution and is known to contain most speaker-specific characteristics. Experimental results using a 50-speaker database show that the higher-order-statistics-based method gives better identification performance than the conventional second-order-statistics-based method in noisy environments.

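The Gaussian-suppression property that HOS methods exploit can be illustrated with the third-order cumulant: it vanishes for Gaussian noise but stays nonzero for a non-Gaussian signal. A hypothetical toy demonstration (not the paper's feature extractor):

```python
import random

# Third central moment (third-order cumulant at zero lag):
# near zero for Gaussian data, nonzero for skewed non-Gaussian data.
def third_cumulant(x):
    n = len(x)
    mean = sum(x) / n
    return sum((v - mean) ** 3 for v in x) / n

random.seed(0)
gauss = [random.gauss(0.0, 1.0) for _ in range(100_000)]       # Gaussian noise
skewed = [random.expovariate(1.0) for _ in range(100_000)]     # non-Gaussian

print(abs(third_cumulant(gauss)))   # close to 0: Gaussian is suppressed
print(abs(third_cumulant(skewed)))  # close to 2: exponential's third cumulant
```

This is why HOS-domain features of speech (non-Gaussian) remain informative while additive Gaussian noise contributes little.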

A Study on Word Juncture Modeling for Continuous Speech Recognition of Korean Language (한국어 연속음성 인식을 위한 단어 결합 모델링에 관한 연구)

  • Choi, In-Jeong;Un, Chong-Kwan
    • The Journal of the Acoustical Society of Korea
    • /
    • v.13 no.5
    • /
    • pp.24-31
    • /
    • 1994
  • In this paper, we study continuous speech recognition of Korean using acoustic models of word juncture coarticulation. To alleviate the performance degradation due to coarticulation, we use context-dependent units that model inter-word transitions in addition to intra-word transitions. In all cases the initial phone of each word has to be specified for each possible final phone of the previous word, and similarly for the final phone of each word. To improve the robustness of the HMM parameters, the covariance matrix is smoothed. We also use position-dependent units to improve the discriminative power between units. Simulation results show that when the improved models of word juncture coarticulation are used, the recognition performance is considerably improved compared to the baseline system using only intra-word units.

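The cross-word context-dependent units described above can be sketched as label generation over a word sequence. The `sil` boundary symbol and the `left-phone+right` label format are illustrative assumptions, not the paper's exact notation:

```python
# Hypothetical sketch of cross-word context-dependent unit labels:
# each word-initial phone takes its left context from the final phone of
# the previous word, and each word-final phone takes its right context
# from the initial phone of the next word.
def crossword_units(words):
    """words: list of words, each a list of phone symbols."""
    units = []
    for w, phones in enumerate(words):
        prev_final = words[w - 1][-1] if w > 0 else "sil"     # silence at start
        next_initial = words[w + 1][0] if w + 1 < len(words) else "sil"
        for i, p in enumerate(phones):
            left = prev_final if i == 0 else phones[i - 1]
            right = next_initial if i == len(phones) - 1 else phones[i + 1]
            units.append(f"{left}-{p}+{right}")
    return units

print(crossword_units([["k", "a"], ["n", "i"]]))
# → ['sil-k+a', 'k-a+n', 'a-n+i', 'n-i+sil']
```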

Robust Feature Normalization Scheme Using Separated Eigenspace in Noisy Environments (분리된 고유공간을 이용한 잡음환경에 강인한 특징 정규화 기법)

  • Lee Yoonjae;Ko Hanseok
    • The Journal of the Acoustical Society of Korea
    • /
    • v.24 no.4
    • /
    • pp.210-216
    • /
    • 2005
  • We propose a new feature normalization scheme based on eigenspace for achieving robust speech recognition. In general, mean and variance normalization (MVN) is performed in the cepstral domain. However, another MVN approach using eigenspace was recently introduced, in which normalization is performed in a single eigenspace. This procedure consists of a linear PCA matrix feature transformation followed by mean and variance normalization of the transformed cepstral feature. In this method, the 39-dimensional feature distribution is represented using only a single eigenspace. However, a single eigenspace is observed to be insufficient to represent the entire data distribution. For a more specific representation, we apply unique and independent eigenspaces to the cepstra, delta cepstra, and delta-delta cepstra, respectively. We also normalize the training data in eigenspace and train the model from the normalized training data. Finally, a feature space rotation procedure is introduced to reduce the mismatch between training and test data distributions in noisy conditions. As a result, we obtained a substantial recognition improvement over the basic eigenspace normalization.

Speech Parameters for the Robust Emotional Speech Recognition (감정에 강인한 음성 인식을 위한 음성 파라메터)

  • Kim, Weon-Goo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.16 no.12
    • /
    • pp.1137-1142
    • /
    • 2010
  • This paper studied speech parameters less affected by human emotion for the development of a robust speech recognition system. For this purpose, the effect of emotion on the speech recognition system and robust speech parameters were studied using a speech database containing various emotions. In this study, mel-cepstral coefficients, delta-cepstral coefficients, RASTA mel-cepstral coefficients and frequency-warped mel-cepstral coefficients were used as feature parameters, and CMS (Cepstral Mean Subtraction) was used as a signal bias removal technique. Experimental results showed that the HMM-based speaker-independent word recognizer using vocal tract length normalized mel-cepstral coefficients, their derivatives and CMS for signal bias removal showed the best performance, a 0.78% word error rate. This corresponds to about a 50% word error reduction compared to the performance of the baseline system using mel-cepstral coefficients, their derivatives and CMS.
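The CMS step used in the abstract above is a standard per-utterance operation. A minimal sketch with hypothetical toy data (not the paper's recognizer):

```python
# Minimal CMS (Cepstral Mean Subtraction) sketch: subtracting the
# per-utterance mean of each cepstral coefficient removes a stationary
# convolutional channel bias, which appears as an additive offset in
# the cepstral domain.
def cms(frames):
    """frames: list of cepstral vectors (lists of floats) for one utterance."""
    dim = len(frames[0])
    means = [sum(f[d] for f in frames) / len(frames) for d in range(dim)]
    return [[f[d] - means[d] for d in range(dim)] for f in frames]

clean = [[1.0, 2.0], [3.0, 4.0]]
biased = [[c + 10.0 for c in f] for f in clean]  # constant channel offset
print(cms(clean) == cms(biased))  # → True: the bias is removed
```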

A study on Speech Recognition Using Recurrent Neural Predictive HMM (회귀신경망 예측 HMM을 이용한 음성 인식에 관한 연구)

  • 박경훈;한학용;김수훈;허강인
    • Proceedings of the Korea Institute of Convergence Signal Processing
    • /
    • 2000.08a
    • /
    • pp.153-156
    • /
    • 2000
  • In this paper, we construct a recurrent neural predictive HMM, a hybrid network of a predictive recurrent neural network and an HMM. The recurrent neural predictive HMM defines a predictive recurrent neural network as a predictor for each HMM state; instead of a fixed mean vector, it uses a prediction that varies dynamically under the influence of past feature vectors, so the model itself has the character of a dynamic network that reflects time variation. It is therefore well suited to recognizing time-series patterns such as speech. Depending on the structure of the predictive recurrent neural network, we distinguish the Elman-network predictive HMM and the Jordan-network predictive HMM. In the experiments, the number of states was increased to 4, 5, and 6, and recognition performance was examined for each number of states while varying the prediction order and the number of hidden units. On the evaluation data, the Elman-network predictive HMM performed best with 6 states, prediction order 3, and 15 hidden units, and the Jordan-network predictive HMM with 5 states, prediction order 3, and 10 hidden units, each achieving an excellent result of 99.5%.


Cepstral Feature Normalization Methods Using Pole Filtering and Scale Normalization for Robust Speech Recognition (강인한 음성인식을 위한 극점 필터링 및 스케일 정규화를 이용한 켑스트럼 특징 정규화 방식)

  • Choi, Bo Kyeong;Ban, Sung Min;Kim, Hyung Soon
    • The Journal of the Acoustical Society of Korea
    • /
    • v.34 no.4
    • /
    • pp.316-320
    • /
    • 2015
  • In this paper, the pole filtering concept is applied to the mel-frequency cepstral coefficient (MFCC) feature vectors in the conventional cepstral mean normalization (CMN) and cepstral mean and variance normalization (CMVN) frameworks. Additionally, the performance of cepstral mean and scale normalization (CMSN), which uses scale normalization instead of variance normalization, is evaluated in speech recognition experiments in noisy environments. Because CMN and CMVN are usually performed on a per-utterance basis, reliable estimation of the mean and variance is not guaranteed for short utterances. By applying the pole filtering and scale normalization techniques to the feature normalization process, this problem can be relieved. Experimental results on the Aurora 2 database show that the feature normalization method combining pole filtering and scale normalization yields the best improvement.
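The per-utterance CMVN baseline that the abstract above builds on can be sketched for a single coefficient track. This is a hypothetical illustration, not the paper's pole-filtered variant:

```python
import math

# Per-utterance CMVN sketch: each cepstral coefficient track is shifted
# to zero mean and scaled to unit variance. With short utterances these
# mean/variance estimates become unreliable, which is the problem the
# pole filtering and scale normalization techniques aim to relieve.
def cmvn(track):
    """track: one cepstral coefficient over all frames of an utterance."""
    n = len(track)
    mean = sum(track) / n
    var = sum((v - mean) ** 2 for v in track) / n
    std = math.sqrt(var) if var > 0 else 1.0  # guard against constant tracks
    return [(v - mean) / std for v in track]

out = cmvn([1.0, 2.0, 3.0, 4.0])
print(round(sum(out), 6), round(sum(v * v for v in out) / len(out), 6))  # → 0.0 1.0
```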

Simultaneous Speaker and Environment Adaptation by Environment Clustering in Various Noise Environments (다양한 잡음 환경하에서 환경 군집화를 통한 화자 및 환경 동시 적응)

  • Kim, Young-Kuk;Song, Hwa-Jeon;Kim, Hyung-Soon
    • The Journal of the Acoustical Society of Korea
    • /
    • v.28 no.6
    • /
    • pp.566-571
    • /
    • 2009
  • This paper proposes a noise-robust fast speaker adaptation method based on the eigenvoice framework in various noisy environments. The proposed method focuses on de-noising and environment clustering. Since the de-noised adaptation DB still contains residual noise, environment clustering divides the noisy adaptation data into similar environments, using the cepstral mean of non-speech segments as the feature vector for clustering. The adaptation data in each cluster are then used to build an environment-clustered speaker-adapted (SA) model. After selecting multiple environment-clustered SA models that are similar to the test environment, speaker adaptation based on an appropriate linear combination of the clustered SA models is conducted. According to our experiments, the proposed method provides an error rate reduction of 40~59% over the baseline speaker-independent model.

Mask Estimation Based on Band-Independent Bayesian Classifier for Missing-Feature Reconstruction (Missing-Feature 복구를 위한 대역 독립 방식의 베이시안 분류기 기반 마스크 예측 기법)

  • Kim Wooil;Stern Richard M.;Ko Hanseok
    • The Journal of the Acoustical Society of Korea
    • /
    • v.25 no.2
    • /
    • pp.78-87
    • /
    • 2006
  • In this paper, we propose an effective mask estimation scheme for missing-feature reconstruction in order to achieve robust speech recognition under unknown noise environments. In previous work, colored noise generated over the entire partitioned frequency range was used for training the mask classifier; however, this gives limited performance when the training database is restricted. To reflect the spectral events of more varied background noises and improve performance at the same time, a new Bayesian classifier for mask estimation is proposed that works independently of the other frequency bands. In the proposed method, we employ colored noise obtained by combining colored noises generated in each frequency band, in order to reflect more varied noise environments and mitigate the 'sparse' database problem. Combined with cluster-based missing-feature reconstruction, the performance of the proposed method is evaluated on a noisy speech recognition task. The results show that the proposed method improves performance compared to the previous method under white noise, car noise and background music conditions.