• Title/Summary/Keyword: Single channel speech enhancement

Performance Analysis of a Class of Single Channel Speech Enhancement Algorithms for Automatic Speech Recognition (자동 음성 인식기를 위한 단채널 음질 향상 알고리즘의 성능 분석)

  • Song, Myung-Suk;Lee, Chang-Heon;Lee, Seok-Pil;Kang, Hong-Goo
    • The Journal of the Acoustical Society of Korea
    • /
    • v.29 no.2E
    • /
    • pp.86-99
    • /
    • 2010
  • This paper analyzes the performance of various single channel speech enhancement algorithms when they are applied to automatic speech recognition (ASR) systems as a preprocessor. The functional modules of speech enhancement systems are first divided into four major modules: a gain estimator, a noise power spectrum estimator, an a priori signal-to-noise ratio (SNR) estimator, and a speech absence probability (SAP) estimator. We then investigate the relationship between speech recognition accuracy and the role of each module. Simulation results show that the Wiener filter outperforms other gain functions, such as the minimum mean square error-short time spectral amplitude (MMSE-STSA) and minimum mean square error-log spectral amplitude (MMSE-LSA) estimators, when a perfect noise estimator is applied. When the performance of the noise estimator degrades, however, MMSE methods that include the decision-directed a priori SNR estimation module and the SAP estimation module help to improve the performance of the enhancement algorithm for speech recognition systems.
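The decision-directed a priori SNR estimator and the Wiener gain function named in this abstract can be sketched as follows; this is a minimal numpy illustration of the common textbook formulations, not the paper's implementation, and the function names are my own:

```python
import numpy as np

def wiener_gain(xi):
    # Wiener gain as a function of the a priori SNR xi:
    # G = xi / (1 + xi), applied per frequency bin.
    return xi / (1.0 + xi)

def decision_directed_snr(prev_clean_spec, noisy_spec, noise_psd, alpha=0.98):
    # Decision-directed a priori SNR estimate: blend the previous
    # frame's clean-speech estimate with the current frame's
    # instantaneous (a posteriori) SNR.
    gamma = np.abs(noisy_spec) ** 2 / noise_psd            # a posteriori SNR
    xi = (alpha * np.abs(prev_clean_spec) ** 2 / noise_psd
          + (1.0 - alpha) * np.maximum(gamma - 1.0, 0.0))
    return xi
```

A frame of noisy spectrum would then be enhanced as `wiener_gain(xi) * noisy_spec`.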

Single-Channel Non-Causal Speech Enhancement to Suppress Reverberation and Background Noise

  • Song, Myung-Suk;Kang, Hong-Goo
    • The Journal of the Acoustical Society of Korea
    • /
    • v.31 no.8
    • /
    • pp.487-506
    • /
    • 2012
  • This paper proposes a speech enhancement algorithm that improves speech intelligibility by suppressing both reverberation and background noise. The algorithm adopts a non-causal single-channel minimum variance distortionless response (MVDR) filter to exploit the additional information contained in subsequent frames of the noisy-reverberant signal. The noisy-reverberant signal is decomposed into the desired signal and the interference uncorrelated with it. The filter equation is then derived from the MVDR criterion so as to minimize the residual interference without introducing speech distortion. The estimation of the correlation parameter, which plays a key role in determining the overall performance of the system, is mathematically derived from a general statistical reverberation model. Furthermore, practical methods are developed to estimate the sub-parameters required for the correlation parameter. The efficiency of the proposed enhancement algorithm is verified by performance evaluation. The results show that the proposed algorithm achieves significant performance improvement in all studied conditions and is especially superior in severely noisy and strongly reverberant environments.
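The MVDR criterion underlying this filter has a standard closed form; the numpy sketch below shows only the generic solution (minimize interference power subject to a distortionless constraint), not the paper's single-channel derivation, where the correlation matrix and constraint vector would be built from stacked subsequent frames:

```python
import numpy as np

def mvdr_weights(interference_corr, constraint_vec):
    # Generic MVDR solution: w = R^-1 d / (d^H R^-1 d), which
    # minimizes w^H R w subject to the distortionless constraint
    # w^H d = 1 for the desired component.
    r_inv_d = np.linalg.solve(interference_corr, constraint_vec)
    return r_inv_d / np.dot(np.conj(constraint_vec), r_inv_d)
```

The constraint w^H d = 1 guarantees that minimizing output power suppresses only the interference, not the desired signal.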

A single-channel speech enhancement method based on restoration of both spectral amplitudes and phases for push-to-talk communication (Push-to-talk 통신을 위한 진폭 및 위상 복원 기반의 단일 채널 음성 향상 방식)

  • Cho, Hye-Seung;Kim, Hyoung-Gook
    • The Journal of the Acoustical Society of Korea
    • /
    • v.36 no.1
    • /
    • pp.64-69
    • /
    • 2017
  • In this paper, we propose a single-channel speech enhancement method based on the restoration of both spectral amplitudes and phases for PTT (Push-To-Talk) communication. Unlike other single-channel speech enhancement methods that use only spectral amplitudes, the proposed method combines spectral amplitude and phase enhancement to provide high-quality speech. We carried out side-by-side comparison experiments in various non-stationary noise environments in order to evaluate the performance of the proposed method. The experimental results show that the proposed method provides higher-quality speech than the other methods under the different noise conditions.
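The amplitude-plus-phase restoration idea can be illustrated per frame as below; this is a generic STFT-domain sketch, and the gain and phase estimators themselves are placeholders rather than the paper's method:

```python
import numpy as np

def enhance_frame(noisy_frame, mag_gain, phase_estimate):
    # Enhance one windowed frame by modifying BOTH magnitude and phase.
    # mag_gain: per-bin spectral gain (e.g. from a Wiener-type rule)
    # phase_estimate: per-bin enhanced phase; magnitude-only methods
    # would instead reuse np.angle of the noisy spectrum here.
    spec = np.fft.rfft(noisy_frame)
    enhanced_mag = mag_gain * np.abs(spec)
    enhanced_spec = enhanced_mag * np.exp(1j * phase_estimate)
    return np.fft.irfft(enhanced_spec, n=len(noisy_frame))
```

With a unit gain and the noisy phase, the frame is reconstructed unchanged, which is a quick sanity check of the synthesis path.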

A Single Channel Speech Enhancement for Automatic Speech Recognition

  • Lee, Jinkyu;Seo, Hyunson;Kang, Hong-Goo
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2011.07a
    • /
    • pp.85-88
    • /
    • 2011
  • This paper describes a single channel speech enhancement method used as the pre-processor of an automatic speech recognition system. The improvements are based on using the optimally modified log-spectral amplitude (OM-LSA) gain function with a non-causal a priori signal-to-noise ratio (SNR) estimator. Experimental results show that the proposed method gives a better perceptual evaluation of speech quality (PESQ) score, a lower log-spectral distance, and better word accuracy. In the enhancement system, the parameters were tuned for automatic speech recognition.
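The OM-LSA gain weights the conditional log-spectral amplitude gain by the speech presence probability, falling back to a floor gain where speech is likely absent. A minimal sketch of that combination, with the LSA gain itself taken as a given input:

```python
import numpy as np

def om_lsa_gain(g_lsa, speech_presence_prob, g_min=0.1):
    # Optimally modified LSA gain: geometric weighting of the
    # conditional LSA gain and the minimum gain G_min by the
    # speech presence probability p.
    p = np.asarray(speech_presence_prob, dtype=float)
    return (g_lsa ** p) * (g_min ** (1.0 - p))
```

When p = 1 the full LSA gain is applied; when p = 0 only the floor G_min remains, which limits musical noise in speech-absent bins.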

On Effective Dual-Channel Noise Reduction for Speech Recognition in Car Environment

  • Ahn, Sung-Joo;Kang, Sun-Mee;Ko, Han-Seok
    • Speech Sciences
    • /
    • v.11 no.1
    • /
    • pp.43-52
    • /
    • 2004
  • This paper concerns an effective dual-channel noise reduction method for increasing the performance of speech recognition in a car environment. While various single-channel methods have already been developed and dual-channel methods have been studied to some extent, their effectiveness in real environments such as cars has not yet been shown to reach an acceptable performance level. Our aim is to remedy the low performance of existing single- and dual-channel noise reduction methods. This paper proposes an effective dual-channel noise reduction method based on a high-pass filter and front-end processing using the eigendecomposition method. We experimented with a real multi-channel car database and compared the results with respect to the microphone arrangements. From the analysis and results, we show that the enhanced eigendecomposition method combined with a high-pass filter significantly improves speech recognition performance in a dual-channel environment.

Implementation of the single channel adaptive noise canceller using TMS320C30 (TMS320C30을 이용한 단일채널 적응잡음제거기 구현)

  • Jung, Sung-Yun;Woo, Se-Jeong;Son, Chang-Hee;Bae, Keun-Sung
    • Speech Sciences
    • /
    • v.8 no.2
    • /
    • pp.73-81
    • /
    • 2001
  • In this paper, we focus on the real-time implementation of a single channel adaptive noise canceller (ANC) using a TMS320C30 EVM board. The implemented single channel adaptive noise canceller is based on a reference paper [1], in which the recursive average magnitude difference function (AMDF) is used, on a sample basis, to obtain a properly delayed version of the input speech as the reference signal, together with the normalized least mean square (NLMS) algorithm. To verify the results of the real-time implementation, we measured the processing time of the ANC and the enhancement ratio at various signal-to-noise ratios (SNRs). Experimental results demonstrate that the processing time for a speech signal of 32 ms length, with delay estimation every 10 samples, is about 26.3 ms, and that the implemented system achieves almost the same performance as reported in [1].
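The NLMS core of such an adaptive noise canceller can be sketched as follows; the AMDF-based delay estimation from [1] is omitted, and the function name and parameters are illustrative:

```python
import numpy as np

def nlms_canceller(reference, primary, order=8, mu=0.5, eps=1e-8):
    # Normalized LMS adaptive filter: adapt weights w so the filtered
    # reference tracks the primary input; in an adaptive noise
    # canceller the error signal e is the enhanced output.
    w = np.zeros(order)
    out = np.zeros(len(primary))
    for n in range(order, len(primary)):
        x = reference[n - order:n][::-1]        # most recent sample first
        e = primary[n] - np.dot(w, x)
        w += mu * e * x / (np.dot(x, x) + eps)  # normalized step size
        out[n] = e
    return out, w
```

For a primary input that is just a scaled, delayed copy of the reference, the weights converge to that scale at the corresponding tap and the output error decays toward zero.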

Speech Enhancement Using Phase-Dependent A Priori SNR Estimator in Log-Mel Spectral Domain

  • Lee, Yun-Kyung;Park, Jeon Gue;Lee, Yun Keun;Kwon, Oh-Wook
    • ETRI Journal
    • /
    • v.36 no.5
    • /
    • pp.721-729
    • /
    • 2014
  • We propose a novel phase-based method for single-channel speech enhancement to extract and enhance the desired signals in noisy environments by utilizing the phase information. In the method, a phase-dependent a priori signal-to-noise ratio (SNR) is estimated in the log-mel spectral domain to utilize both the magnitude and phase information of input speech signals. The phase-dependent estimator is incorporated into the conventional magnitude-based decision-directed approach that recursively computes the a priori SNR from noisy speech. Additionally, we reduce the performance degradation owing to the one-frame delay of the estimated phase-dependent a priori SNR by using a minimum mean square error (MMSE)-based and maximum a posteriori (MAP)-based estimator. In our speech enhancement experiments, the proposed phase-dependent a priori SNR estimator is shown to improve the output SNR by 2.6 dB for both the MMSE-based and MAP-based estimator cases as compared to a conventional magnitude-based estimator.

Nonlinear Speech Enhancement Method for Reducing the Amount of Speech Distortion According to Speech Statistics Model (음성 통계 모형에 따른 음성 왜곡량 감소를 위한 비선형 음성강조법)

  • Choi, Jae-Seung
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.16 no.3
    • /
    • pp.465-470
    • /
    • 2021
  • A robust speech recognition technology is required that degrades neither recognition performance nor speech quality when recognition is performed in real environments where speech is mixed with noise. With the development of such technology, it is necessary to build applications that achieve a stable and high speech recognition rate even in noise whose spectrum resembles that of human speech. Therefore, this paper proposes a speech enhancement algorithm that performs noise suppression based on the MMSA-STSA estimation algorithm, a short-time spectral amplitude method based on the minimum mean square error. This algorithm is an effective nonlinear speech enhancement algorithm that operates on a single channel input and has high noise suppression performance. Moreover, it reduces the amount of speech distortion based on a statistical model of speech. In the experiments, the effectiveness of the MMSA-STSA estimation algorithm is verified by comparing the input and output speech waveforms.

Improved speech enhancement of multi-channel Wiener filter using adjustment of principal subspace vector (다채널 위너 필터의 주성분 부공간 벡터 보정을 통한 잡음 제거 성능 개선)

  • Kim, Gibak
    • The Journal of the Acoustical Society of Korea
    • /
    • v.39 no.5
    • /
    • pp.490-496
    • /
    • 2020
  • We present a method to improve the performance of the multi-channel Wiener filter in noisy environments. To build a subspace-based multi-channel Wiener filter in the case of a single target source, the target speech component can be effectively estimated in the principal subspace of the speech correlation matrix. The speech correlation matrix can be estimated by subtracting the noise correlation matrix from the signal correlation matrix, based on the assumption that the cross-correlation between speech and interfering noise is negligible compared with the speech correlation. However, this assumption does not hold in the presence of strong interfering noise, and significant error can accordingly be induced in the principal subspace. In this paper, we propose to adjust the principal subspace vector using the speech presence probability and the steering vector for the desired speech source. The multi-channel speech presence probability is derived in the principal subspace and applied to adjust the principal subspace vector. Simulation results show that the proposed method improves the performance of the multi-channel Wiener filter in noisy environments.
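The subspace step described here, estimating the speech correlation matrix by subtraction and taking its principal eigenvector, can be sketched in numpy; the adjustment by speech presence probability and steering vector, which is the paper's contribution, is not reproduced:

```python
import numpy as np

def principal_subspace_vector(signal_corr, noise_corr):
    # Estimate the speech correlation matrix by subtracting the noise
    # correlation matrix from the signal correlation matrix, then take
    # the eigenvector of the largest eigenvalue as the principal
    # subspace (valid for a single target source).
    speech_corr = signal_corr - noise_corr
    eigvals, eigvecs = np.linalg.eigh(speech_corr)
    return eigvecs[:, np.argmax(eigvals)]
```

For a rank-one speech correlation matrix built from a single source direction, the recovered vector is aligned (up to sign) with that direction.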

Multi-resolution DenseNet based acoustic models for reverberant speech recognition (잔향 환경 음성인식을 위한 다중 해상도 DenseNet 기반 음향 모델)

  • Park, Sunchan;Jeong, Yongwon;Kim, Hyung Soon
    • Phonetics and Speech Sciences
    • /
    • v.10 no.1
    • /
    • pp.33-38
    • /
    • 2018
  • Although deep neural network-based acoustic models have greatly improved the performance of automatic speech recognition (ASR), reverberation still degrades the performance of distant speech recognition in indoor environments. In this paper, we adopt the DenseNet, which has shown great performance results in image classification tasks, to improve the performance of reverberant speech recognition. The DenseNet enables the deep convolutional neural network (CNN) to be effectively trained by concatenating feature maps in each convolutional layer. In addition, we extend the concept of multi-resolution CNN to multi-resolution DenseNet for robust speech recognition in reverberant environments. We evaluate the performance of reverberant speech recognition on the single-channel ASR task in reverberant voice enhancement and recognition benchmark (REVERB) challenge 2014. According to the experimental results, the DenseNet-based acoustic models show better performance than do the conventional CNN-based ones, and the multi-resolution DenseNet provides additional performance improvement.
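The feature-map concatenation that defines a DenseNet block can be illustrated with a shape-only numpy sketch; real layers would be convolutions with learned weights, so each layer here is an arbitrary callable:

```python
import numpy as np

def dense_block(x, layers):
    # DenseNet-style block: each layer receives the concatenation of
    # the block input and all preceding layers' outputs along the
    # channel axis (arrays of shape (channels, time, freq)).
    features = [x]
    for layer in layers:
        features.append(layer(np.concatenate(features, axis=0)))
    return np.concatenate(features, axis=0)
```

With a 3-channel input and three layers each emitting k = 2 channels, the block output has 3 + 3·2 = 9 channels, the linear channel growth characteristic of DenseNet.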