• Title/Summary/Keyword: robust speech recognition

Search results: 225

Efficient Compensation of Spectral Tilt for Speech Recognition in Noisy Environment (잡음 환경에서 음성인식을 위한 스펙트럼 기울기의 효과적인 보상 방법)

  • Cho, Jungho
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.17 no.1, pp.199-206, 2017
  • Environmental noise can degrade the performance of a speech recognition system. This paper presents a cepstrum-based feature compensation procedure that makes the recognition system robust to noise. The approach directly compensates the spectral tilt to remove the effects of additive noise. The compensation scheme operates in the cepstral domain by calculating the spectral tilt of the log power spectrum, and it is applied in combination with SNR-dependent cepstral mean compensation. Experimental results in the presence of white Gaussian noise, subway noise, and car noise show that the proposed method achieves substantial improvements in recognition accuracy at various SNRs.
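
As a rough illustration of this style of compensation (not the authors' exact procedure), the sketch below estimates the spectral tilt of a frame's log power spectrum as a least-squares slope and removes the part attributable to additive noise, with a correction that grows as the SNR drops. All function names and the weighting constant `alpha` are assumptions made for the example.

```python
import numpy as np

def spectral_tilt(log_power_spectrum):
    """Least-squares slope of the log power spectrum over frequency bins."""
    bins = np.arange(len(log_power_spectrum))
    slope, _ = np.polyfit(bins, log_power_spectrum, 1)
    return slope

def compensate_frame(log_power_spectrum, noise_log_power, snr_db, alpha=1.0):
    """Remove the portion of the spectral tilt attributable to additive noise,
    weighting the correction more heavily at low SNR (illustrative only)."""
    tilt_noisy = spectral_tilt(log_power_spectrum)
    tilt_noise = spectral_tilt(noise_log_power)
    weight = alpha / (1.0 + 10.0 ** (snr_db / 10.0))   # stronger correction at low SNR
    bins = np.arange(len(log_power_spectrum))
    correction = weight * (tilt_noisy - tilt_noise) * (bins - bins.mean())
    return log_power_spectrum - correction
```

Cepstra computed from the compensated spectrum could then be passed through SNR-dependent cepstral mean compensation, as the abstract describes.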

Cepstrum PDF Normalization Method for Speech Recognition in Noise Environment (잡음환경에서의 음성인식을 위한 켑스트럼의 확률분포 정규화 기법)

  • Suk Yong Ho;Lee Hwang-Soo;Choi Seung Ho
    • The Journal of the Acoustical Society of Korea, v.24 no.4, pp.224-229, 2005
  • In this paper, we propose a novel cepstrum normalization method that normalizes the probability density function (pdf) of the cepstrum for robust speech recognition in additive noise environments. While conventional methods normalize only the first- and/or second-order statistics, such as the mean and/or variance of the cepstrum, the proposed method fully normalizes the statistics of the cepstrum by making the pdfs of the clean and noisy cepstra identical to each other. For the target pdf, the generalized Gaussian distribution is selected to cover various densities. In the recognition phase, we devise a table lookup method to save computational cost. Speaker-independent isolated-word recognition experiments show that the proposed method gives improved performance compared with the conventional methods, especially in heavy-noise environments.
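
The following sketch shows the general idea of pdf normalization via histogram equalization: build the empirical CDF of one cepstral coefficient, then map each value through the inverse CDF of a target density. A standard normal target is used here for simplicity, whereas the paper selects a generalized Gaussian; the lookup table mirrors the abstract's point about saving computation at recognition time. Function and parameter names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def pdf_normalize(coeff, n_bins=256):
    """Map one cepstral-coefficient sequence so its empirical distribution matches
    a target density (standard normal here; the paper uses a generalized Gaussian).
    A lookup table over quantized input values keeps the test-time cost low."""
    lo, hi = coeff.min(), coeff.max()
    edges = np.linspace(lo, hi, n_bins + 1)
    hist, _ = np.histogram(coeff, bins=edges)
    cdf = np.cumsum(hist) / hist.sum()                 # empirical CDF per bin
    cdf = np.clip(cdf, 1e-6, 1 - 1e-6)
    table = norm.ppf(cdf)                              # inverse target CDF
    idx = np.clip(np.digitize(coeff, edges) - 1, 0, n_bins - 1)
    return table[idx]
```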

Noise-Robust Speech Detection Using The Coefficient of Variation of Spectrum (스펙트럼의 변동계수를 이용한 잡음에 강인한 음성 구간 검출)

  • Kim Youngmin;Hahn Minsoo
    • MALSORI, no.48, pp.107-116, 2003
  • This paper deals with a new parameter for voice detection, which is used in many areas of speech engineering such as speech synthesis, speech recognition, and speech coding. The coefficient of variation (CV) of the speech spectrum, together with other feature parameters, is used to detect speech. The CV is calculated only over a specific range of the speech spectrum, and average magnitude and spectral magnitude are also employed to improve detector performance. In the experiments, the proposed voice detector outperformed a conventional energy-based detector in terms of error measurements.
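
A minimal sketch of a CV-based decision, assuming frame-wise power spectra are already available; the band limits and threshold are placeholders, not values from the paper, and a practical detector would combine the CV with the magnitude features the abstract mentions.

```python
import numpy as np

def frame_cv(power_spectrum, lo_bin, hi_bin):
    """Coefficient of variation (std / mean) of the spectrum over a chosen band.
    Speech frames tend to have a peaky spectrum (high CV); stationary noise is flatter."""
    band = power_spectrum[lo_bin:hi_bin]
    return band.std() / (band.mean() + 1e-12)

def detect_speech(frames_power, lo_bin=4, hi_bin=64, threshold=1.0):
    """Label each frame as speech if its spectral CV exceeds a threshold."""
    return np.array([frame_cv(f, lo_bin, hi_bin) > threshold for f in frames_power])
```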

An Efficient Approach for Noise Robust Speech Recognition by Using the Deterministic Noise Model (결정적 잡음 모델을 이용한 효율적인 잡음음성 인식 접근 방법)

  • Chung, Yong-Joo
    • The Journal of the Acoustical Society of Korea, v.21 no.6, pp.559-565, 2002
  • In this paper, we propose an efficient method for estimating the Hidden Markov Model (HMM) parameters of noisy speech. In previous approaches, noisy-speech HMM parameters are usually obtained analytically from assumed noise statistics. However, because these methods rely on simplifying assumptions, it is difficult for them to approach the real statistics of the noisy speech. Instead of relying on such simplifications, we use statistics drawn from the clean-speech HMMs and employ a deterministic noise model. The new scheme shows improved results at a reduced computational cost.

Rejection Performance Analysis in Vocabulary Independent Speech Recognition Based on Normalized Confidence Measure (정규화신뢰도 기반 가변어휘 고립단어 인식기의 거절기능 성능 분석)

  • Choi, Seung-Ho
    • The Journal of the Acoustical Society of Korea, v.25 no.2, pp.96-100, 2006
  • Kim et al. proposed the Normalized Confidence Measure (NCM) [1-2], which was successfully used to reject mis-recognized words in isolated word recognition, but their experiments were limited to fixed-vocabulary speech recognition. In this paper, we apply NCM to vocabulary-independent speech recognition (VISP) and report its rejection performance in that domain. In particular, we propose a vector quantization (VQ) based method to overcome the problem of unseen triphones, which arises because NCM uses triphone confidence statistics when triphone-based normalization is applied. Speech recognition experiments show that phone-based normalization gives better results than RLJC [3] and also than the triphone-based normalization approach; these results differ from those of Kim et al. [1-2]. In conclusion, phone-based normalization shows robust performance in the VISP domain.

A Korean speech recognition based on conformer (콘포머 기반 한국어 음성인식)

  • Koo, Myoung-Wan
    • The Journal of the Acoustical Society of Korea, v.40 no.5, pp.488-495, 2021
  • We propose a speech recognition system based on the conformer. The conformer is a convolution-augmented transformer that combines a transformer, which captures global information, with a Convolutional Neural Network (CNN), which exploits local features effectively. The baseline is a transformer-based speech recognition system with a Long Short-Term Memory (LSTM)-based language model, while the proposed system uses a conformer instead of a transformer, together with a transformer-based language model. When evaluated on the Electronics and Telecommunications Research Institute (ETRI) speech corpus in AI-Hub, the proposed system yields a Character Error Rate (CER) of 5.7 %, compared with 11.8 % for the baseline. Even when the evaluation is extended to another AI-Hub domain, the NHNdiguest speech corpus, the proposed system remains robust across both domains. These experiments validate the proposed system.
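
For readers unfamiliar with the architecture, the sketch below shows the generic macaron structure of one conformer block (half-step feed-forward, multi-head self-attention, depthwise-convolution module, second half-step feed-forward), written for a reasonably recent PyTorch. It is a generic illustration of the conformer design, not the configuration or code used in this paper, and all dimensions are placeholder values.

```python
import torch
import torch.nn as nn

class ConformerBlock(nn.Module):
    """One conformer block: half-step FFN -> self-attention -> conv module -> half-step FFN,
    each wrapped in a residual connection (dimensions are illustrative)."""
    def __init__(self, d_model=256, n_heads=4, conv_kernel=31, ff_mult=4, dropout=0.1):
        super().__init__()
        self.ff1 = self._feed_forward(d_model, ff_mult, dropout)
        self.attn_norm = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout, batch_first=True)
        self.conv_norm = nn.LayerNorm(d_model)
        self.conv = nn.Sequential(                      # convolution module
            nn.Conv1d(d_model, 2 * d_model, kernel_size=1),
            nn.GLU(dim=1),                              # gated linear unit
            nn.Conv1d(d_model, d_model, conv_kernel, padding=conv_kernel // 2, groups=d_model),
            nn.BatchNorm1d(d_model),
            nn.SiLU(),
            nn.Conv1d(d_model, d_model, kernel_size=1),
            nn.Dropout(dropout),
        )
        self.ff2 = self._feed_forward(d_model, ff_mult, dropout)
        self.out_norm = nn.LayerNorm(d_model)

    @staticmethod
    def _feed_forward(d_model, mult, dropout):
        return nn.Sequential(
            nn.LayerNorm(d_model),
            nn.Linear(d_model, mult * d_model),
            nn.SiLU(),
            nn.Dropout(dropout),
            nn.Linear(mult * d_model, d_model),
            nn.Dropout(dropout),
        )

    def forward(self, x):                               # x: (batch, time, d_model)
        x = x + 0.5 * self.ff1(x)
        y = self.attn_norm(x)
        x = x + self.attn(y, y, y, need_weights=False)[0]
        y = self.conv_norm(x).transpose(1, 2)           # (batch, d_model, time) for Conv1d
        x = x + self.conv(y).transpose(1, 2)
        x = x + 0.5 * self.ff2(x)
        return self.out_norm(x)
```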

A Study on Robust Emotion Classification Structure Between Heterogeneous Speech Databases (이종 음성 DB 환경에 강인한 감성 분류 체계에 대한 연구)

  • Yoon, Won-Jung;Park, Kyu-Sik
    • The Journal of the Acoustical Society of Korea, v.28 no.5, pp.477-482, 2009
  • Emotion recognition systems in commercial environments such as call centers suffer severe performance degradation and instability due to differences in speech characteristics between the system's training database and the input speech of unspecified customers. To alleviate these problems, this paper extends the traditional neutral/anger emotion recognition method into a two-step hierarchical structure that exploits emotional characteristic changes and the differences between male and female speech. The experimental results indicate that the proposed method provides stable classification and improves emotion classification performance by about 25 % over the traditional method.

Robust Speech Recognition using Noise Compensation Method Based on Eigen - Environment (Eigen - Environment 잡음 보상 방법을 이용한 강인한 음성인식)

  • Song Hwa Jeon;Kim Hyung Soon
    • MALSORI, no.52, pp.145-160, 2004
  • In this paper, a new noise compensation method based on the eigenvoice framework in the feature space is proposed to reduce the mismatch between training and testing environments. The difference between clean and noisy environments is represented by a linear combination of K eigenvectors that capture the variation among environments. In the proposed method, the performance improvement of the speech recognition system depends largely on how the noisy models and the bias vector set are constructed. Two methods are proposed for constructing the noisy models: one based on MAP adaptation and the other using a stereo DB. In experiments on the Aurora 2 DB, the eigen-environment method obtains a 44.86 % relative improvement over the baseline system. In particular, in the clean-condition training mode, the proposed method yields a 66.74 % relative improvement, which is better than several methods previously proposed in the Aurora project.
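
A bare-bones sketch of the eigen-environment idea, assuming per-environment bias vectors (e.g., noisy-minus-clean cepstral means) are already available: PCA over the training biases gives the K eigenvectors, and a test environment's bias is estimated as the mean bias plus a least-squares combination of them. The paper's MAP-adaptation and stereo-DB model construction are not reproduced here, and all names are illustrative.

```python
import numpy as np

def build_eigen_environment(bias_vectors, k):
    """bias_vectors: (n_envs, dim) array of environment bias vectors
    (e.g., noisy-minus-clean cepstral means for each training environment)."""
    mean_bias = bias_vectors.mean(axis=0)
    _, _, vt = np.linalg.svd(bias_vectors - mean_bias, full_matrices=False)
    return mean_bias, vt[:k]                            # top-K eigenvectors

def compensate(noisy_feats, clean_mean, mean_bias, eigvecs):
    """Estimate the test-environment bias as the mean bias plus a linear combination
    of the K eigenvectors (least-squares weights here), then remove it."""
    observed_bias = noisy_feats.mean(axis=0) - clean_mean
    weights = eigvecs @ (observed_bias - mean_bias)     # projection coefficients
    est_bias = mean_bias + eigvecs.T @ weights
    return noisy_feats - est_bias
```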

A Robust Speaker Identification Using Optimized Confidence and Modified HMM Decoder (최적화된 관측 신뢰도와 변형된 HMM 디코더를 이용한 잡음에 강인한 화자식별 시스템)

  • Tariquzzaman, Md.;Kim, Jin-Young;Na, Seung-Yu
    • MALSORI, no.64, pp.121-135, 2007
  • Speech signals are distorted by channel characteristics or additive noise, which severely degrades speaker and speech recognition performance. To cope with the noise problem, we propose a modified HMM decoder algorithm using SNR-based observation confidence, an approach that has been successfully applied to GMMs in the speaker identification task. The modification weights the observation probabilities with reliability values obtained from the SNR. We also apply particle swarm optimization (PSO) to the confidence function to maximize speaker identification performance. The proposed method is evaluated on the ETRI speaker recognition database, and the experimental results show that the modified HMM decoder algorithm clearly enhances performance.
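
The core of the modification can be sketched as follows: map each frame's SNR to a reliability value and use it to scale the frame's observation log-likelihood before it enters the Viterbi accumulation. The sigmoid form and its constants below are placeholders; the paper instead optimizes the confidence function with PSO.

```python
import numpy as np

def snr_confidence(snr_db, a=0.2, b=5.0):
    """Map frame SNR (dB) to a reliability in (0, 1) with a sigmoid
    (a and b are illustrative, not the paper's optimized values)."""
    return 1.0 / (1.0 + np.exp(-a * (snr_db - b)))

def weighted_log_likelihood(frame_loglik, snr_db):
    """Down-weight the observation log-probability of unreliable (low-SNR) frames
    before it is accumulated in the HMM decoder."""
    return snr_confidence(snr_db) * frame_loglik
```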

Practical Considerations for Hardware Implementations of the Auditory Model and Evaluations in Real World Noisy Environments

  • Kim, Doh-Suk;Jeong, Jae-Hoon;Lee, Soo-Young;Kil, Rhee M.
    • The Journal of the Acoustical Society of Korea, v.16 no.1E, pp.15-23, 1997
  • The Zero-Crossings with Peak Amplitudes (ZCPA) model, motivated by the human auditory periphery, was proposed to extract reliable features from speech signals even in noisy environments for robust speech recognition. In this paper, some practical considerations for digital hardware implementations of the ZCPA model are addressed and evaluated on speech corrupted by several real-world noises as well as white Gaussian noise. The infinite impulse response (IIR) filters that constitute the cochlear filterbank of the ZCPA are replaced by Hamming bandpass filters whose frequency responses are less similar to biological neural tuning curves. Experimental results demonstrate that the detailed frequency response of the cochlear filters is not critical to performance. The sensitivity of the model output to variations in microphone gain is also investigated, and the results show good reliability of the ZCPA model.
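
A heavily simplified software sketch of the ZCPA idea (not the hardware-oriented design the paper evaluates): each bandpass channel is scanned for upward zero-crossings, the peak amplitude between successive crossings is log-compressed and accumulated into a histogram indexed by the inverse crossing interval, and the histograms are pooled over channels per frame. Filter design, band centers, and bin edges are all illustrative.

```python
import numpy as np
from scipy.signal import firwin, lfilter

def zcpa_features(x, fs, n_bands=16, n_hist_bins=20, frame_len=400, hop=160):
    """Simplified ZCPA sketch: per band, locate upward zero-crossings, take the peak
    amplitude between successive crossings, and add log(1 + peak) to a histogram bin
    chosen by the inverse crossing interval (an instantaneous-frequency estimate)."""
    centers = np.logspace(np.log10(200), np.log10(fs / 2 * 0.8), n_bands)
    hist_edges = np.logspace(np.log10(100), np.log10(fs / 2), n_hist_bins + 1)
    n_frames = 1 + (len(x) - frame_len) // hop
    feats = np.zeros((n_frames, n_hist_bins))
    for c in centers:
        band = firwin(101, [0.8 * c, 1.2 * c], fs=fs, pass_zero=False, window="hamming")
        y = lfilter(band, 1.0, x)
        for t in range(n_frames):
            seg = y[t * hop : t * hop + frame_len]
            zc = np.where((seg[:-1] < 0) & (seg[1:] >= 0))[0]   # upward zero-crossings
            for i in range(len(zc) - 1):
                freq = fs / (zc[i + 1] - zc[i])
                peak = np.max(np.abs(seg[zc[i] : zc[i + 1]]))
                b = np.searchsorted(hist_edges, freq) - 1
                if 0 <= b < n_hist_bins:
                    feats[t, b] += np.log1p(peak)
    return feats
```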
