• Title/Abstract/Keyword: Single channel speech enhancement

자동 음성 인식기를 위한 단채널 음질 향상 알고리즘의 성능 분석 (Performance Analysis of a Class of Single Channel Speech Enhancement Algorithms for Automatic Speech Recognition)

  • 송명석;이창헌;이석필;강홍구
    • The Journal of the Acoustical Society of Korea
    • /
    • Vol. 29, No. 2E
    • /
    • pp.86-99
    • /
    • 2010
  • This paper analyzes the performance of various single-channel speech enhancement algorithms when they are applied to automatic speech recognition (ASR) systems as preprocessors. The functional modules of speech enhancement systems are first divided into four major modules: a gain estimator, a noise power spectrum estimator, an a priori signal-to-noise ratio (SNR) estimator, and a speech absence probability (SAP) estimator. We investigate the relationship between speech recognition accuracy and the role of each module. Simulation results show that the Wiener filter outperforms other gain functions, such as the minimum mean square error short-time spectral amplitude (MMSE-STSA) and minimum mean square error log-spectral amplitude (MMSE-LSA) estimators, when a perfect noise estimator is applied. When the performance of the noise estimator degrades, however, MMSE methods that include the decision-directed a priori SNR estimation module and the SAP estimation module help to improve the performance of the enhancement algorithm for speech recognition systems.
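For orientation, the minimal Python sketch below (not the authors' implementation) shows how two of the modules named above interact: a decision-directed a priori SNR estimator feeding a Wiener gain function. The noise power spectrum and any SAP weighting are assumed to be supplied by separate estimators.

```python
import numpy as np

def wiener_gain(xi):
    """Wiener gain as a function of the a priori SNR xi."""
    return xi / (1.0 + xi)

def enhance_frames(noisy_power, noise_power, alpha=0.98):
    """Per-frame gains from a decision-directed a priori SNR estimate.

    noisy_power, noise_power: arrays of shape (frames, bins).
    The noise power would normally come from a noise estimator; here it
    is assumed to be given.
    """
    gains = np.empty_like(noisy_power)
    prev_clean_power = np.zeros(noisy_power.shape[1])
    for t in range(noisy_power.shape[0]):
        gamma = noisy_power[t] / np.maximum(noise_power[t], 1e-12)  # a posteriori SNR
        # Decision-directed blend of the previous clean-power estimate
        # and the instantaneous SNR estimate.
        xi = alpha * prev_clean_power / np.maximum(noise_power[t], 1e-12) \
             + (1.0 - alpha) * np.maximum(gamma - 1.0, 0.0)
        g = wiener_gain(xi)
        gains[t] = g
        prev_clean_power = (g ** 2) * noisy_power[t]
    return gains
```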

Single-Channel Non-Causal Speech Enhancement to Suppress Reverberation and Background Noise

  • Song, Myung-Suk;Kang, Hong-Goo
    • 한국음향학회지
    • /
    • Vol. 31, No. 8
    • /
    • pp.487-506
    • /
    • 2012
  • This paper proposes a speech enhancement algorithm that improves speech intelligibility by suppressing both reverberation and background noise. The algorithm adopts a non-causal single-channel minimum variance distortionless response (MVDR) filter to exploit additional information contained in the noisy-reverberant signals of subsequent frames. The noisy-reverberant signals are decomposed into the desired signal component and an interference component that is uncorrelated with the desired signal. The filter equation is then derived from the MVDR criterion to minimize the residual interference without introducing speech distortion. The estimation of the correlation parameter, which plays an important role in determining the overall performance of the system, is derived mathematically from a general statistical reverberation model. Furthermore, practical methods for estimating the sub-parameters required to compute the correlation parameter are developed. The efficiency of the proposed algorithm is verified by performance evaluations, which show that it achieves significant improvement in all studied conditions and is especially effective in severely noisy and strongly reverberant environments.
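The paper's filter derivation is not reproduced here. As a rough sketch under the generic MVDR formulation, the filter weights for a buffer of current and subsequent STFT frames could be computed as below; the interference covariance and the correlation vector are assumed to be estimated elsewhere (e.g., from the statistical reverberation model mentioned in the abstract).

```python
import numpy as np

def noncausal_mvdr_weights(Phi_u, gamma):
    """Generic MVDR weight vector for one frequency bin.

    Phi_u : (L, L) covariance matrix of the interference (residual
            reverberation plus noise) across a buffer of L frames.
    gamma : (L,) correlation vector relating the buffered observations
            to the desired (direct-path) signal.
    Both are assumed to be estimated by separate routines.
    """
    Phi_inv_gamma = np.linalg.solve(Phi_u, gamma)
    return Phi_inv_gamma / (np.conj(gamma) @ Phi_inv_gamma)

def apply_filter(Y_buffer, w):
    """Estimate the desired signal from an (L,) buffer of noisy-reverberant
    STFT coefficients (current frame plus subsequent frames)."""
    return np.conj(w) @ Y_buffer
```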

Push-to-talk 통신을 위한 진폭 및 위상 복원 기반의 단일 채널 음성 향상 방식 (A single-channel speech enhancement method based on restoration of both spectral amplitudes and phases for push-to-talk communication)

  • 조혜승;김형국
    • 한국음향학회지
    • /
    • Vol. 36, No. 1
    • /
    • pp.64-69
    • /
    • 2017
  • This paper proposes a single-channel speech enhancement method for push-to-talk (PTT) wireless communication based on restoring both spectral amplitudes and phases. Unlike conventional methods that enhance only the amplitude of the signal, the proposed method separates the amplitude and phase of the speech signal, enhances each of them, and recombines them to provide higher-quality speech. To evaluate the proposed method, step-by-step comparative experiments were conducted in dynamic noise environments, and the results confirm that it delivers high-quality speech in a variety of noise conditions.
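As a hedged illustration of the amplitude/phase split described above (the paper's actual amplitude and phase estimators are not reproduced), the sketch below enhances the STFT magnitude and phase separately and recombines them. Spectral subtraction and simple temporal phase smoothing are placeholders for the real estimators, and `noise_mag` is assumed to come from a separate noise tracker.

```python
import numpy as np
from scipy.signal import stft, istft

def enhance_amp_and_phase(noisy, noise_mag, fs=16000, nperseg=512):
    """Enhance magnitude and phase separately, then recombine (sketch)."""
    f, t, Y = stft(noisy, fs=fs, nperseg=nperseg)
    mag, phase = np.abs(Y), np.angle(Y)

    # Amplitude branch: spectral subtraction as a placeholder estimator.
    clean_mag = np.maximum(mag - noise_mag[:, None], 0.05 * mag)

    # Phase branch: placeholder smoothing of the unwrapped phase over time.
    clean_phase = np.unwrap(phase, axis=1)
    clean_phase[:, 1:-1] = (clean_phase[:, :-2] + clean_phase[:, 1:-1]
                            + clean_phase[:, 2:]) / 3.0

    # Recombine the separately enhanced amplitude and phase.
    _, enhanced = istft(clean_mag * np.exp(1j * clean_phase),
                        fs=fs, nperseg=nperseg)
    return enhanced
```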

A Single Channel Speech Enhancement for Automatic Speech Recognition

  • 이진규;서현손;강홍구
    • 한국방송∙미디어공학회:학술대회논문집
    • /
    • 한국방송공학회 2011년도 하계학술대회
    • /
    • pp.85-88
    • /
    • 2011
  • This paper describes a single-channel speech enhancement method used as the pre-processor of an automatic speech recognition system. The improvements are based on an optimally modified log-spectral amplitude (OM-LSA) gain function combined with non-causal a priori signal-to-noise ratio (SNR) estimation. Experimental results show that the proposed method yields a better perceptual evaluation of speech quality (PESQ) score, a lower log-spectral distance, and higher word accuracy. In the enhancement system, the parameters were tuned for automatic speech recognition.
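A brief sketch of the OM-LSA gain referred to above, in its standard form rather than the tuned configuration of the paper: the log-spectral-amplitude gain under the speech-presence hypothesis is blended with a spectral floor according to the speech presence probability. Here `xi`, `gamma`, and `p` are assumed to be provided, with `xi` coming from the non-causal a priori SNR estimator.

```python
import numpy as np
from scipy.special import exp1

def om_lsa_gain(xi, gamma, p, g_min=0.1):
    """OM-LSA gain per time-frequency bin (vectorized over arrays).

    xi    : a priori SNR
    gamma : a posteriori SNR
    p     : speech presence probability
    g_min : spectral floor applied when speech is judged absent
    """
    v = xi * gamma / (1.0 + xi)
    # Log-spectral amplitude (LSA) gain under the speech-presence hypothesis.
    g_h1 = (xi / (1.0 + xi)) * np.exp(0.5 * exp1(np.maximum(v, 1e-10)))
    # Blend the LSA gain with the floor via the speech presence probability.
    return (g_h1 ** p) * (g_min ** (1.0 - p))
```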

On Effective Dual-Channel Noise Reduction for Speech Recognition in Car Environment

  • Ahn, Sung-Joo;Kang, Sun-Mee;Ko, Han-Seok
    • 음성과학
    • /
    • Vol. 11, No. 1
    • /
    • pp.43-52
    • /
    • 2004
  • This paper concerns an effective dual-channel noise reduction method for increasing speech recognition performance in a car environment. While various single-channel methods have already been developed and dual-channel methods have been studied to some extent, their effectiveness in real environments such as cars has not yet been formally proven in terms of achieving an acceptable performance level. Our aim is to remedy the low performance of existing single- and dual-channel noise reduction methods. This paper proposes an effective dual-channel noise reduction method based on a high-pass filter and eigendecomposition-based front-end processing. We experimented with a real multi-channel car database and compared the results with respect to microphone arrangements. The analysis and results show that the enhanced eigendecomposition method combined with the high-pass filter significantly improves speech recognition performance in a dual-channel environment.
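The exact eigendecomposition front end is described in the paper; the sketch below only illustrates the general idea under stated assumptions: both channels are high-pass filtered, and each frequency bin is projected onto the principal eigenvector of its 2x2 spatial covariance.

```python
import numpy as np
from scipy.signal import butter, lfilter, stft

def highpass(x, fs=16000, cutoff=200.0, order=4):
    """High-pass pre-filter to remove low-frequency car noise."""
    b, a = butter(order, cutoff / (fs / 2.0), btype="highpass")
    return lfilter(b, a, x)

def principal_component_enhance(ch1, ch2, fs=16000, nperseg=512):
    """Rough eigendecomposition front end for two channels: project each
    time-frequency bin onto the principal eigenvector of the 2x2 spatial
    covariance. A generic subspace projection, not the paper's exact method."""
    X = np.stack([stft(highpass(c, fs), fs=fs, nperseg=nperseg)[2]
                  for c in (ch1, ch2)])
    out = np.zeros(X.shape[1:], dtype=complex)
    for k in range(X.shape[1]):               # frequency bins
        R = X[:, k, :] @ X[:, k, :].conj().T  # 2x2 covariance over frames
        w = np.linalg.eigh(R)[1][:, -1]       # principal eigenvector
        out[k, :] = w.conj() @ X[:, k, :]     # project both channels onto it
    return out                                # enhanced single-channel STFT
```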

TMS320C30을 이용한 단일채널 적응잡음제거기 구현 (Implementation of the single channel adaptive noise canceller using TMS320C30)

  • 정성윤;우세정;손창희;배건성
    • 음성과학
    • /
    • Vol. 8, No. 2
    • /
    • pp.73-81
    • /
    • 2001
  • In this paper, we focus on the real-time implementation of a single-channel adaptive noise canceller (ANC) using a TMS320C30 EVM board. The implemented canceller is based on a reference paper [1], in which the recursive average magnitude difference function (AMDF) is used to obtain, on a sample basis, a properly delayed version of the input speech as the reference signal, and the filter is adapted with the normalized least mean square (NLMS) algorithm. To verify the real-time implementation, we measured the processing time of the ANC and the enhancement ratio at various signal-to-noise ratios (SNRs). Experimental results demonstrate that the processing time for a 32 ms speech segment, with delay estimation every 10 samples, is about 26.3 ms, and that the implemented system achieves almost the same performance as reported in [1].
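A simplified Python sketch of the scheme described above (the TMS320C30 real-time implementation details are of course not reproduced): the reference signal is a delayed copy of the noisy input, with the delay re-estimated periodically from the AMDF, and the filter is adapted with NLMS; window lengths and other constants are illustrative only.

```python
import numpy as np

def amdf_delay(x, min_lag=40, max_lag=320):
    """Pick the lag that minimizes the average magnitude difference function."""
    amdf = [np.mean(np.abs(x[lag:] - x[:-lag])) for lag in range(min_lag, max_lag)]
    return min_lag + int(np.argmin(amdf))

def single_channel_anc(noisy, filt_len=64, mu=0.5, eps=1e-6, update_every=10):
    """Single-channel ANC: a delayed copy of the noisy input serves as the
    reference, and an NLMS filter extracts the periodic (speech) component."""
    noisy = np.asarray(noisy, dtype=float)
    w = np.zeros(filt_len)
    out = np.zeros_like(noisy)
    delay = amdf_delay(noisy[:640])
    for n in range(filt_len + delay, len(noisy)):
        if n % update_every == 0 and n >= 640:
            delay = amdf_delay(noisy[n - 640:n])        # refresh the delay
        ref = noisy[n - delay - filt_len + 1: n - delay + 1][::-1]
        y = w @ ref                        # filter output: speech estimate
        e = noisy[n] - y                   # error: residual noise estimate
        w += mu * e * ref / (ref @ ref + eps)            # NLMS update
        out[n] = y
    return out
```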

Speech Enhancement Using Phase-Dependent A Priori SNR Estimator in Log-Mel Spectral Domain

  • Lee, Yun-Kyung;Park, Jeon Gue;Lee, Yun Keun;Kwon, Oh-Wook
    • ETRI Journal
    • /
    • Vol. 36, No. 5
    • /
    • pp.721-729
    • /
    • 2014
  • We propose a novel phase-based method for single-channel speech enhancement that extracts and enhances the desired signal in noisy environments by utilizing phase information. In the method, a phase-dependent a priori signal-to-noise ratio (SNR) is estimated in the log-mel spectral domain to exploit both the magnitude and phase information of the input speech signal. The phase-dependent estimator is incorporated into the conventional magnitude-based decision-directed approach that recursively computes the a priori SNR from noisy speech. Additionally, we reduce the performance degradation caused by the one-frame delay of the estimated phase-dependent a priori SNR by using minimum mean square error (MMSE)-based and maximum a posteriori (MAP)-based estimators. In our speech enhancement experiments, the proposed phase-dependent a priori SNR estimator improves the output SNR by 2.6 dB in both the MMSE-based and MAP-based estimator cases compared to a conventional magnitude-based estimator.
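For context, the sketch below shows only the conventional magnitude-based decision-directed recursion, applied in the mel power domain, that the paper extends; the `phase_term` argument is a placeholder for where a phase-dependent instantaneous estimate would enter, not the paper's actual estimator.

```python
import numpy as np

def decision_directed_logmel(noisy_mel_pow, noise_mel_pow,
                             phase_term=None, alpha=0.98):
    """Decision-directed a priori SNR recursion on mel-filterbank powers.

    noisy_mel_pow, noise_mel_pow: arrays of shape (frames, mel_bins).
    phase_term: optional (frames, mel_bins) placeholder for a phase-dependent
    instantaneous SNR estimate replacing the magnitude-only term.
    """
    xi = np.empty_like(noisy_mel_pow)
    prev_clean = np.zeros(noisy_mel_pow.shape[1])
    for t in range(noisy_mel_pow.shape[0]):
        gamma = noisy_mel_pow[t] / np.maximum(noise_mel_pow[t], 1e-12)
        inst = np.maximum(gamma - 1.0, 0.0) if phase_term is None else phase_term[t]
        xi[t] = alpha * prev_clean / np.maximum(noise_mel_pow[t], 1e-12) \
                + (1.0 - alpha) * inst
        # Wiener-like update of the clean-power estimate for the next frame.
        prev_clean = (xi[t] / (1.0 + xi[t])) * noisy_mel_pow[t]
    return np.log(np.maximum(xi, 1e-12))   # a priori SNR in the log domain
```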

음성 통계 모형에 따른 음성 왜곡량 감소를 위한 비선형 음성강조법 (Nonlinear Speech Enhancement Method for Reducing the Amount of Speech Distortion According to Speech Statistics Model)

  • 최재승
    • 한국전자통신학회논문지
    • /
    • Vol. 16, No. 3
    • /
    • pp.465-470
    • /
    • 2021
  • When speech recognition is performed in real noisy environments, a robust technique is needed that avoids degradation of both recognition performance and speech quality. Developing such a technique calls for applications that achieve stable, high recognition rates even in noise whose spectrum resembles that of human speech. This paper therefore proposes a speech enhancement algorithm that performs noise suppression based on the MMSA-STSA estimation algorithm, a short-time spectral amplitude method based on the minimum mean square error. The algorithm is an effective nonlinear speech enhancement method operating on a single-channel input; it provides strong noise suppression and reduces the amount of speech distortion based on a statistical model of speech. In the experiments, the effectiveness of the MMSA-STSA estimation algorithm is confirmed by comparing the input and output speech waveforms.
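As context for the estimator family named above, here is a sketch of the standard MMSE short-time spectral amplitude (Ephraim-Malah) gain in its textbook form, not necessarily the exact variant used in the paper; `xi` and `gamma` are assumed to be supplied by a priori and a posteriori SNR estimators.

```python
import numpy as np
from scipy.special import i0e, i1e

def mmse_stsa_gain(xi, gamma):
    """Standard MMSE-STSA (Ephraim-Malah) spectral gain.

    xi: a priori SNR, gamma: a posteriori SNR; arrays of the same shape.
    """
    v = xi * gamma / (1.0 + xi)
    # exp(-v/2)*I0(v/2) and exp(-v/2)*I1(v/2) via the exponentially scaled
    # Bessel functions, which avoids overflow for large v.
    bessel_term = (1.0 + v) * i0e(v / 2.0) + v * i1e(v / 2.0)
    return (np.sqrt(np.pi) / 2.0) * (np.sqrt(v) / gamma) * bessel_term

# Usage sketch: noisy_mag_enhanced = mmse_stsa_gain(xi, gamma) * noisy_mag
```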

다채널 위너 필터의 주성분 부공간 벡터 보정을 통한 잡음 제거 성능 개선 (Improved speech enhancement of multi-channel Wiener filter using adjustment of principal subspace vector)

  • 김기백
    • 한국음향학회지
    • /
    • Vol. 39, No. 5
    • /
    • pp.490-496
    • /
    • 2020
  • This paper proposes a method for improving the performance of the multi-channel Wiener filter in noisy environments. When a subspace-based multi-channel Wiener filter is designed and the target is a single source, the speech component can be estimated from the principal subspace of the speech correlation matrix. The speech correlation matrix is estimated by subtracting the interference correlation matrix from the signal correlation matrix, under the assumption that the cross-correlation between speech and interference is negligible compared with the speech correlation matrix. However, as the interference level increases, this assumption is no longer valid, and the estimation error of the principal subspace increases accordingly. This study proposes a method that adjusts the principal subspace using the speech presence probability and the direction (steering) vector of the target signal. A multi-channel speech presence probability is derived in the principal subspace and applied to adjust the principal subspace vector. Experiments confirm that the proposed method improves the performance of the multi-channel Wiener filter in noisy environments.
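A rough sketch of the idea for one frequency bin (the paper's exact adjustment rule and the derivation of the multi-channel speech presence probability are not reproduced): estimate the speech covariance by subtraction, take its principal eigenvector, and blend it toward the target steering vector when the speech presence probability suggests the subtraction is unreliable. The blending rule below is a placeholder.

```python
import numpy as np

def adjusted_principal_subspace(R_y, R_n, steer, p_speech):
    """Adjust the principal subspace vector of the estimated speech covariance.

    R_y, R_n : (M, M) noisy-signal and interference covariance matrices.
    steer    : (M,) steering (direction) vector of the target source.
    p_speech : scalar speech presence probability in [0, 1].
    """
    R_s = R_y - R_n                              # speech covariance estimate
    eigvals, eigvecs = np.linalg.eigh(R_s)
    u = eigvecs[:, -1]                           # principal subspace vector
    d = steer / np.linalg.norm(steer)
    # Placeholder sign alignment so the two vectors can be blended.
    u = u * np.sign(np.real(np.vdot(d, u)) + 1e-12)
    u_adj = p_speech * u + (1.0 - p_speech) * d  # placeholder blending rule
    return u_adj / np.linalg.norm(u_adj)
```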

잔향 환경 음성인식을 위한 다중 해상도 DenseNet 기반 음향 모델 (Multi-resolution DenseNet based acoustic models for reverberant speech recognition)

  • 박순찬;정용원;김형순
    • 말소리와 음성과학
    • /
    • Vol. 10, No. 1
    • /
    • pp.33-38
    • /
    • 2018
  • Although deep neural network-based acoustic models have greatly improved the performance of automatic speech recognition (ASR), reverberation still degrades the performance of distant speech recognition in indoor environments. In this paper, we adopt DenseNet, which has shown excellent performance in image classification tasks, to improve reverberant speech recognition. DenseNet enables deep convolutional neural networks (CNNs) to be trained effectively by concatenating the feature maps of each convolutional layer. In addition, we extend the concept of the multi-resolution CNN to a multi-resolution DenseNet for robust speech recognition in reverberant environments. We evaluate performance on the single-channel ASR task of the REverberant Voice Enhancement and Recognition Benchmark (REVERB) challenge 2014. According to the experimental results, the DenseNet-based acoustic models outperform conventional CNN-based ones, and the multi-resolution DenseNet provides additional improvement.
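To make the feature-map concatenation idea concrete, here is a toy PyTorch sketch of a dense block and a two-resolution front end; layer sizes, kernel choices, and the overall structure are illustrative only and are not the paper's configuration.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Minimal dense block: each layer's output is concatenated with its
    input, so later layers see the feature maps of all earlier layers."""
    def __init__(self, in_ch, growth=12, num_layers=3):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Sequential(
                nn.BatchNorm2d(in_ch + i * growth),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_ch + i * growth, growth, kernel_size=3, padding=1),
            )
            for i in range(num_layers)
        ])

    def forward(self, x):
        for layer in self.layers:
            x = torch.cat([x, layer(x)], dim=1)   # concatenate feature maps
        return x

class MultiResolutionFrontEnd(nn.Module):
    """Toy multi-resolution front end: the same spectro-temporal input is
    convolved at two resolutions (kernel sizes) and the streams are
    concatenated before the dense block."""
    def __init__(self, growth=12):
        super().__init__()
        self.fine = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.coarse = nn.Conv2d(1, 16, kernel_size=7, padding=3)
        self.dense = DenseBlock(32, growth=growth)

    def forward(self, feats):                     # feats: (batch, 1, freq, time)
        x = torch.cat([self.fine(feats), self.coarse(feats)], dim=1)
        return self.dense(x)
```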