• Title/Summary/Keyword: speech enhancement

Speech Enhancement Based on Psychoacoustic Model (심리음향모델에 근거한 음성개선)

  • Lee Jingeol
    • Proceedings of the Acoustical Society of Korea Conference / spring / pp.337-338 / 2000
  • The perceptual filter for speech enhancement was analytically derived by forcing the frequency content of the input noisy signal to match that of the estimated clean signal in the auditory domain. However, the analytical derivation relies on a deconvolution associated with the spreading function of the psychoacoustic model, which is an ill-conditioned problem. To cope with this problem, we propose a novel psychoacoustic-model-based speech enhancement filter whose principle is the same as that of the perceptual filter, but which is derived by a constrained optimization that provides stable solutions to the ill-conditioned problem.
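
The ill-conditioned deconvolution mentioned above can be avoided by posing the inversion of the spreading function as a constrained fit. The following Python sketch illustrates the general idea with non-negative, Tikhonov-regularized least squares; the triangular spreading function, band count, and regularization weight are illustrative assumptions, not the formulation derived in the paper.

```python
# Hedged sketch: recovering a Bark-band spectrum x from an excitation pattern
# e = S @ x, where S is the (ill-conditioned) spreading-function matrix of a
# psychoacoustic model.  Direct inversion of S is unstable; a constrained
# least-squares fit (non-negativity + Tikhonov term) gives a usable solution.
# The spreading-function shape and band count are illustrative assumptions,
# not the formulation used in the paper.
import numpy as np
from scipy.optimize import nnls

N_BANDS = 24  # illustrative Bark-band count

def spreading_matrix(n_bands=N_BANDS, slope_db=10.0):
    """Toy triangular spreading function in dB, converted to linear gains."""
    idx = np.arange(n_bands)
    spread_db = -slope_db * np.abs(idx[:, None] - idx[None, :])
    return 10.0 ** (spread_db / 10.0)

def deconvolve_excitation(excitation, lam=1e-3):
    """Estimate the Bark spectrum by constrained (non-negative, regularized)
    least squares instead of directly inverting the spreading matrix."""
    S = spreading_matrix()
    # Tikhonov regularization: augment the system with sqrt(lam) * I.
    A = np.vstack([S, np.sqrt(lam) * np.eye(N_BANDS)])
    b = np.concatenate([excitation, np.zeros(N_BANDS)])
    x, _residual = nnls(A, b)   # enforces x >= 0
    return x

# Usage: simulate an excitation pattern and recover the band spectrum.
rng = np.random.default_rng(0)
true_bark = rng.uniform(0.0, 1.0, N_BANDS)
excitation = spreading_matrix() @ true_bark
estimate = deconvolve_excitation(excitation)
print(np.round(estimate - true_bark, 3))
```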

Filtering of a Dissonant Frequency Combined with Noise Reduction for Speech Enhancement (잡음 감소와 불협화음 제거를 통한 음성신호 향상)

  • Sangki Kang;Lee, Youn-Jeong;Lee, Ki-Yong
    • The Journal of the Acoustical Society of Korea / v.23 no.1E / pp.16-18 / 2004
  • There have been numerous studies on the enhancement of noisy speech signals. In this paper, we propose a new speech enhancement method: filtering of a dissonant frequency combined with a noise reduction algorithm. The simulation results indicate that the proposed method provides a significant gain in audible quality compared with the conventional method. Therefore, when the proposed enhancement scheme is used as a pre-filter, the perceptual quality of speech is greatly improved.
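
As a rough illustration of the kind of cascade described in the abstract, the sketch below notches out a single assumed dissonant frequency and follows it with plain magnitude spectral subtraction. The 2.5 kHz notch frequency, Q factor, and over-subtraction factor are placeholders; the paper's actual detection of dissonant components is not described in the abstract.

```python
# Hedged sketch: cascading a notch filter at an assumed "dissonant" frequency
# with basic magnitude spectral subtraction.  How the paper actually detects
# dissonant components is not described in the abstract; the 2.5 kHz notch,
# the Q factor, and the over-subtraction factor below are illustrative only.
import numpy as np
from scipy.signal import iirnotch, lfilter, stft, istft

def remove_dissonant_tone(x, fs, f_dissonant=2500.0, q=30.0):
    """Suppress a single assumed dissonant frequency with an IIR notch."""
    b, a = iirnotch(f_dissonant, q, fs=fs)
    return lfilter(b, a, x)

def spectral_subtraction(x, fs, noise_seconds=0.25, alpha=2.0, floor=0.02):
    """Plain magnitude spectral subtraction with a leading-noise estimate."""
    f, t, X = stft(x, fs=fs, nperseg=512)
    n_noise = max(1, int(noise_seconds * fs / 256))      # frames of pure noise
    noise_mag = np.abs(X[:, :n_noise]).mean(axis=1, keepdims=True)
    mag = np.maximum(np.abs(X) - alpha * noise_mag, floor * np.abs(X))
    _, y = istft(mag * np.exp(1j * np.angle(X)), fs=fs, nperseg=512)
    return y

def enhance(x, fs):
    """Use the cascade as a pre-filter: notch first, then subtract noise."""
    return spectral_subtraction(remove_dissonant_tone(x, fs), fs)
```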

Speech enhancement using psychoacoustics model (사이코어쿠스틱스 모델을 이용한 음성 향상)

  • Kwon, Chul-Hyun;Shin, Dae-Kyu;Park, Sang-Hui
    • Proceedings of the KIEE Conference / 1999.11c / pp.748-750 / 1999
  • In this study, a speech enhancement method is presented based on the well-known auditory mechanism of noise masking. The approach is to derive a modifier that achieves audible noise suppression. This modification selectively affects only the perceptually significant spectral values, so it is less prone to introducing unwanted distortions than methods that modify the complete short-time spectral amplitude (STSA), and it produces better results at both low and high SNR. Because the method uses only the minimum spectral value per critical band, it requires an accurate estimate of that value; for this, a modified spectral subtraction that is more flexible than power spectral subtraction is employed. The experimental results show a better SNR than the conventional approach.
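
A minimal sketch of the critical-band processing described above: the power spectrum is over-subtracted and then floored at the minimum spectral value estimated in each critical band. The band edges, over-subtraction factor, and the use of the per-band minimum as a spectral floor are assumptions made for illustration only.

```python
# Hedged sketch: estimating the minimum spectral value per critical band after
# an over-subtraction step, as a stand-in for the masking-threshold-driven
# modifier described above.  The band edges, over-subtraction factor, and the
# way the per-band minimum is reused as a spectral floor are assumptions.
import numpy as np

# Approximate critical-band (Bark) edges in Hz, truncated for a 4 kHz band.
BARK_EDGES = np.array([0, 100, 200, 300, 400, 510, 630, 770, 920, 1080,
                       1270, 1480, 1720, 2000, 2320, 2700, 3150, 3700, 4000])

def band_minima(power_spec, freqs):
    """Minimum spectral value inside each critical band."""
    mins = np.empty(len(BARK_EDGES) - 1)
    for k in range(len(BARK_EDGES) - 1):
        in_band = (freqs >= BARK_EDGES[k]) & (freqs < BARK_EDGES[k + 1])
        mins[k] = power_spec[in_band].min() if in_band.any() else 0.0
    return mins

def modified_subtraction(noisy_power, noise_power, freqs, alpha=3.0):
    """Over-subtract, then floor each critical band at its minimum estimate."""
    sub = noisy_power - alpha * noise_power
    floor = band_minima(np.maximum(noisy_power - noise_power, 1e-12), freqs)
    for k in range(len(BARK_EDGES) - 1):
        in_band = (freqs >= BARK_EDGES[k]) & (freqs < BARK_EDGES[k + 1])
        sub[in_band] = np.maximum(sub[in_band], floor[k])
    return sub
```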

A study on speech enhancement using complex-valued spectrum employing Feature map Dependent attention gate (특징 맵 중요도 기반 어텐션을 적용한 복소 스펙트럼 기반 음성 향상에 관한 연구)

  • Jaehee Jung;Wooil Kim
    • The Journal of the Acoustical Society of Korea / v.42 no.6 / pp.544-551 / 2023
  • Speech enhancement, which is used to improve the perceptual quality and intelligibility of noisy speech, has been studied with complex-valued spectrum methods that can improve both magnitude and phase, rather than magnitude-only methods. In this paper, we study how to apply an attention mechanism to complex-valued spectrum-based speech enhancement systems to further improve the intelligibility and quality of noisy speech. The attention is based on additive attention, and the attention weights are computed with the complex-valued spectrum taken into account. In addition, global average pooling is used to consider the importance of each feature map. Complex-valued spectrum-based speech enhancement is performed with the Deep Complex U-Net (DCUNET) model, and the additive attention follows the approach of the Attention U-Net model. Experiments on noisy speech in a living-room environment show that the proposed method outperforms the baseline model on evaluation metrics such as Source-to-Distortion Ratio (SDR), Perceptual Evaluation of Speech Quality (PESQ), and Short-Time Objective Intelligibility (STOI), with consistent improvements across various background noise environments and low Signal-to-Noise Ratio (SNR) conditions. These results demonstrate the effectiveness of the proposed speech enhancement system in improving the intelligibility and quality of noisy speech.
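
The sketch below shows one way an additive attention gate with a global-average-pooling branch could be attached to real/imaginary spectrum feature maps, in the spirit of the Attention U-Net gate referenced above. Layer sizes and the way the spatial and channel weights are combined are assumptions; this is not the paper's DCUNET-based architecture.

```python
# Hedged sketch: an Attention U-Net style additive attention gate extended
# with a global-average-pooling branch that weights feature maps, applied to
# real/imaginary channels of a complex spectrum held as ordinary real tensors.
# Layer sizes and the combination of the two branches are illustrative.
import torch
import torch.nn as nn

class AdditiveAttentionGate(nn.Module):
    def __init__(self, channels_x, channels_g, channels_inter):
        super().__init__()
        self.w_x = nn.Conv2d(channels_x, channels_inter, kernel_size=1)
        self.w_g = nn.Conv2d(channels_g, channels_inter, kernel_size=1)
        self.psi = nn.Conv2d(channels_inter, 1, kernel_size=1)
        # Global-average-pooling branch: per-feature-map importance weights.
        self.fc = nn.Sequential(nn.Linear(channels_x, channels_x), nn.Sigmoid())

    def forward(self, x, g):
        # x: skip-connection features, g: gating features from the decoder.
        add = torch.relu(self.w_x(x) + self.w_g(g))
        spatial = torch.sigmoid(self.psi(add))          # (B, 1, F, T)
        pooled = x.mean(dim=(2, 3))                     # global average pooling
        channel = self.fc(pooled)[:, :, None, None]     # (B, C, 1, 1)
        return x * spatial * channel

# Usage with toy shapes: 8 feature maps over a 257x100 spectrogram patch.
gate = AdditiveAttentionGate(channels_x=8, channels_g=8, channels_inter=4)
x = torch.randn(1, 8, 257, 100)
g = torch.randn(1, 8, 257, 100)
print(gate(x, g).shape)   # torch.Size([1, 8, 257, 100])
```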

Noise Reduction Using the Standard Deviation of the Time-Frequency Bin and Modified Gain Function for Speech Enhancement in Stationary and Nonstationary Noisy Environments

  • Lee, Soo-Jeong;Kim, Soon-Hyob
    • The Journal of the Acoustical Society of Korea / v.26 no.3E / pp.87-96 / 2007
  • In this paper we propose a new noise reduction algorithm for stationary and nonstationary noisy environments. Our algorithm classifies the speech and noise contributions in time-frequency bins, and is not based on a spectral algorithm or a minimum-statistics approach. It relies on calculating the ratio of the standard deviation of the noisy power spectrum in time-frequency bins to its normalized time-frequency average. We show that good quality can be achieved for the enhanced speech signal by choosing appropriate values for $\delta_t$ and $\delta_f$. The proposed method greatly reduces the noise while providing enhanced speech with lower residual noise and somewhat higher mean opinion score (MOS), background intrusiveness (BAK), and signal distortion (SIG) scores than conventional methods.
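
A minimal sketch of the bin classification idea: the local standard deviation of the noisy power spectrum along time and along frequency is compared with the normalized time-frequency average, and bins exceeding the thresholds $\delta_t$ or $\delta_f$ are kept as speech-dominant. Window sizes and threshold values are illustrative, not the values tuned in the paper.

```python
# Hedged sketch: classifying time-frequency bins as speech-dominant or
# noise-dominant from the ratio of the local standard deviation of the noisy
# power spectrum to its normalized time-frequency average.  Window sizes and
# the thresholds delta_t / delta_f are illustrative values.
import numpy as np

def tf_bin_speech_mask(noisy_power, delta_t=1.5, delta_f=1.5,
                       time_win=7, freq_win=5):
    """noisy_power: (n_freq, n_frames) power spectrogram -> boolean mask."""
    eps = 1e-12
    avg = noisy_power.mean() + eps            # normalized T-F average

    def local_std(x, win, axis):
        # Sliding standard deviation along one axis via moving averages.
        pad = [(0, 0), (0, 0)]
        pad[axis] = (win // 2, win // 2)
        xp = np.pad(x, pad, mode='edge')
        kernel = np.ones(win) / win
        mean = np.apply_along_axis(
            lambda v: np.convolve(v, kernel, 'valid'), axis, xp)
        mean_sq = np.apply_along_axis(
            lambda v: np.convolve(v ** 2, kernel, 'valid'), axis, xp)
        return np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))

    ratio_t = local_std(noisy_power, time_win, axis=1) / avg
    ratio_f = local_std(noisy_power, freq_win, axis=0) / avg
    # A bin is kept as speech if it is non-stationary enough in either axis.
    return (ratio_t > delta_t) | (ratio_f > delta_f)
```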

Speech enhancement system using the multi-band coherence function and spectral subtraction method (다중 주파수 밴드 간섭함수와 스펙트럼 차감법을 이용한 음성 향상 시스템)

  • Oh, Inkyu;Lee, Insung
    • The Journal of the Acoustical Society of Korea / v.38 no.4 / pp.406-413 / 2019
  • This paper proposes a speech enhancement method that combines a coherence-based gain function with the spectral subtraction method for a closely spaced two-microphone array. A speech enhancement method that uses a gain function estimated from the SNR (Signal-to-Noise Ratio) based on the multi-frequency-band coherence function suffers performance degradation when the input noises of the two channels are highly correlated. A new method is therefore proposed in which a weighted gain function is formed by combining the coherence-based gain with the gain from spectral subtraction. The performance of the proposed method was evaluated with PESQ (Perceptual Evaluation of Speech Quality), an objective quality evaluation test standardized by the ITU-T (International Telecommunication Union Telecommunication Standardization Sector). In the PESQ tests, a maximum improvement of 0.217 in PESQ score was obtained across various background noise environments.
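
A hedged sketch of the gain combination: a frame-recursive magnitude-squared coherence between the two microphones yields one gain, a spectral-subtraction gain on the first microphone yields another, and the two are mixed with a fixed weight. The smoothing constant, weighting rule, and subtraction factor are assumptions for illustration, not the weighting derived in the paper.

```python
# Hedged sketch: combining a coherence-based gain with a spectral-subtraction
# gain for a two-microphone input.  The smoothing constant, the weighting rule
# between the two gains, and the subtraction factor are illustrative.
import numpy as np

def coherence_gain(X1, X2, smooth=0.8, eps=1e-12):
    """Frame-recursive magnitude-squared coherence -> gain per T-F bin.
    X1, X2: (n_freq, n_frames) complex STFTs of the two microphones."""
    n_freq, n_frames = X1.shape
    p11 = np.full(n_freq, eps)
    p22 = np.full(n_freq, eps)
    p12 = np.zeros(n_freq, dtype=complex)
    gain = np.zeros((n_freq, n_frames))
    for t in range(n_frames):
        p11 = smooth * p11 + (1 - smooth) * np.abs(X1[:, t]) ** 2
        p22 = smooth * p22 + (1 - smooth) * np.abs(X2[:, t]) ** 2
        p12 = smooth * p12 + (1 - smooth) * X1[:, t] * np.conj(X2[:, t])
        gain[:, t] = np.abs(p12) ** 2 / (p11 * p22 + eps)  # 0 (noise) .. 1
    return gain

def combined_gain(X1, X2, noise_mag, alpha=1.0, w=0.5, eps=1e-12):
    """Weighted combination with a spectral-subtraction gain on microphone 1."""
    g_coh = coherence_gain(X1, X2)
    g_ss = np.maximum(1.0 - alpha * noise_mag / (np.abs(X1) + eps), 0.0)
    return w * g_coh + (1 - w) * g_ss
```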

A study on deep neural speech enhancement in drone noise environment (드론 소음 환경에서 심층 신경망 기반 음성 향상 기법 적용에 관한 연구)

  • Kim, Jimin;Jung, Jaehee;Yeo, Chaneun;Kim, Wooil
    • The Journal of the Acoustical Society of Korea / v.41 no.3 / pp.342-350 / 2022
  • In this paper, actual drone noise samples are collected for speech processing in disaster environments to build a noise-corrupted speech database, and speech enhancement performance is evaluated by applying spectral subtraction and mask-based speech enhancement techniques. To improve the performance of VoiceFilter (VF), an existing deep neural network-based speech enhancement model, we apply a Self-Attention operation and feed the estimated noise information into the attention model as an additional input. Compared to the existing VF model, the experimental results show improvements of 3.77%, 1.66%, and 0.32% for Source-to-Distortion Ratio (SDR), Perceptual Evaluation of Speech Quality (PESQ), and Short-Time Objective Intelligibility (STOI), respectively. When trained with a 75% mix of speech data corrupted with drone sounds collected from the Internet, the relative performance drops for SDR, PESQ, and STOI are 3.18%, 2.79%, and 0.96%, respectively, compared to using only actual drone noise. This confirms that data similar to real recordings can be collected and effectively used for model training for speech enhancement in environments where real data is difficult to obtain.
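
As a toy illustration of the modification described above, the sketch below feeds the noisy magnitude spectrogram together with an estimated noise spectrogram into a self-attention layer before predicting a mask. Layer sizes are arbitrary and the speaker-embedding branch of the real VoiceFilter is omitted; this is not the authors' model.

```python
# Hedged sketch: a miniature mask estimator in the spirit of the modified
# VoiceFilter described above -- the noisy magnitude spectrogram is
# concatenated with an estimated noise spectrogram and passed through a
# self-attention layer before mask prediction.  Layer sizes are arbitrary.
import torch
import torch.nn as nn

class NoiseAwareMaskNet(nn.Module):
    def __init__(self, n_freq=257, hidden=256, heads=4):
        super().__init__()
        self.proj = nn.Linear(2 * n_freq, hidden)     # noisy + noise estimate
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.mask = nn.Sequential(nn.Linear(hidden, n_freq), nn.Sigmoid())

    def forward(self, noisy_mag, noise_est):
        # noisy_mag, noise_est: (batch, frames, n_freq)
        h = self.proj(torch.cat([noisy_mag, noise_est], dim=-1))
        h, _ = self.attn(h, h, h)                     # self-attention over time
        return self.mask(h)                           # (batch, frames, n_freq)

# Usage: enhanced magnitude = predicted mask * noisy magnitude.
net = NoiseAwareMaskNet()
noisy = torch.rand(1, 100, 257)
noise = torch.rand(1, 100, 257)
enhanced = net(noisy, noise) * noisy
print(enhanced.shape)   # torch.Size([1, 100, 257])
```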

Speech enhancement method based on feature compensation gain for effective speech recognition in noisy environments (잡음 환경에 효과적인 음성인식을 위한 특징 보상 이득 기반의 음성 향상 기법)

  • Bae, Ara;Kim, Wooil
    • The Journal of the Acoustical Society of Korea / v.38 no.1 / pp.51-55 / 2019
  • This paper proposes a speech enhancement method utilizing a feature compensation gain for robust speech recognition in noisy environments. The gain is obtained from a PCGMM (Parallel Combined Gaussian Mixture Model)-based feature compensation method employing variational model composition. The experimental results show that the proposed method significantly outperforms conventional front-end algorithms and our previous work over various background noise types and SNR (Signal-to-Noise Ratio) conditions in a mismatched ASR (Automatic Speech Recognition) setting. The computational complexity is significantly reduced by employing a noise model selection technique while maintaining the speech recognition performance at a similar level.
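
A minimal sketch, assuming the compensation front-end produces log-Mel feature vectors: the feature compensation gain is taken as the ratio of compensated to noisy features and spread back to linear frequency through the Mel filterbank. The Mel-to-linear mapping and the use of a simple per-band ratio are illustrative simplifications, not the PCGMM-based derivation of the paper.

```python
# Hedged sketch: turning a feature-domain compensation result into a spectral
# gain.  The "compensated" and "noisy" features are assumed to be log-Mel
# vectors produced elsewhere (e.g., by a PCGMM-based front-end); the
# Mel-to-linear mapping via the normalized, transposed filterbank is an
# illustrative simplification.
import numpy as np

def feature_compensation_gain(noisy_logmel, comp_logmel, mel_fb, eps=1e-8):
    """noisy_logmel, comp_logmel: (n_mel,) ; mel_fb: (n_mel, n_freq)."""
    mel_gain = np.exp(comp_logmel - noisy_logmel)         # per-Mel-band gain
    # Spread each band gain back to linear frequency with the filterbank.
    weights = mel_fb / (mel_fb.sum(axis=0, keepdims=True) + eps)
    return weights.T @ mel_gain                           # (n_freq,)

def enhance_frame(noisy_spec, gain):
    """Apply the gain to the complex noisy spectrum of one frame."""
    return gain * noisy_spec
```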

Single-Channel Non-Causal Speech Enhancement to Suppress Reverberation and Background Noise

  • Song, Myung-Suk;Kang, Hong-Goo
    • The Journal of the Acoustical Society of Korea / v.31 no.8 / pp.487-506 / 2012
  • This paper proposes a speech enhancement algorithm that improves speech intelligibility by suppressing both reverberation and background noise. The algorithm adopts a non-causal single-channel minimum variance distortionless response (MVDR) filter to exploit additional information contained in the noisy-reverberant signal of subsequent frames. The noisy-reverberant signal is decomposed into the desired signal and the interference that is uncorrelated with it. The filter equation is then derived from the MVDR criterion to minimize the residual interference without introducing speech distortion. The estimation of the correlation parameter, which largely determines the overall performance of the system, is derived mathematically from a general statistical reverberation model. Furthermore, practical methods are developed to estimate the sub-parameters required for the correlation parameter. The efficiency of the proposed algorithm is verified by performance evaluation: it achieves significant improvement in all studied conditions and is especially superior in severely noisy and strongly reverberant environments.
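
The sketch below gives a simplified multi-frame (non-causal) MVDR filter for a single STFT bin: the taps span past and future frames and are normalized so that the component correlated with the desired speech passes undistorted. How the interframe correlation vector and the interference covariance are estimated here is a placeholder for the estimators derived in the paper.

```python
# Hedged sketch: a per-frequency, multi-frame (non-causal) MVDR filter --
# filter taps span past and future frames of one STFT bin and are chosen to
# pass the component correlated with the desired speech while minimizing
# residual interference.  The estimation of gamma and the interference
# covariance is a simplification of the estimators derived in the paper.
import numpy as np

def multiframe_mvdr_taps(gamma, phi_interference, loading=1e-3):
    """gamma: (L,) interframe correlation vector of the desired speech,
    phi_interference: (L, L) covariance of interference + noise."""
    L = len(gamma)
    phi = phi_interference + loading * np.trace(phi_interference) / L * np.eye(L)
    num = np.linalg.solve(phi, gamma)              # Phi^{-1} gamma
    return num / (np.conj(gamma) @ num)            # MVDR normalization

def apply_multiframe_filter(stft_bin, taps, past=2, future=2):
    """stft_bin: (n_frames,) complex values of one frequency bin."""
    L = past + future + 1
    assert len(taps) == L
    padded = np.pad(stft_bin, (past, future), mode='constant')
    frames = np.lib.stride_tricks.sliding_window_view(padded, L)
    return frames @ np.conj(taps)                  # (n_frames,) enhanced bin
```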

Noisy Speech Enhancement Based on Complex Laplacian Probability Density Function (복소 라플라시안 확률 밀도 함수에 기반한 음성 향상 기법)

  • Park, Yun-Sik;Jo, Q-Haing;Chang, Joon-Hyuk
    • Journal of the Institute of Electronics Engineers of Korea SP / v.44 no.6 / pp.111-117 / 2007
  • This paper presents a novel approach to speech enhancement based on a complex Laplacian probability density function (pdf). Using a goodness-of-fit (GOF) test, we show that the complex Laplacian pdf describes the distribution of speech spectral components better than the conventional Gaussian pdf. A likelihood ratio (LR) is then applied to derive the speech absence probability in the speech enhancement algorithm. The performance of the proposed algorithm is evaluated by objective tests and yields better results than the conventional Gaussian pdf-based scheme.
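
A minimal sketch, assuming a complex Laplacian model with independent real and imaginary parts: the likelihood ratio between the speech-present and speech-absent hypotheses is combined with a prior to obtain a per-bin speech absence probability. The variance split and prior value are illustrative assumptions, not the exact statistical model of the paper.

```python
# Hedged sketch: a per-bin likelihood ratio under a complex Laplacian model
# (independent Laplacian real and imaginary parts) and the resulting speech
# absence probability.  The equal variance split between real/imaginary parts
# and the prior q are illustrative assumptions.
import numpy as np

def laplacian_loglike(x, variance):
    """Log-likelihood of complex x whose real/imag parts are i.i.d. Laplacian
    with total variance `variance` (each part has variance variance / 2)."""
    b = np.sqrt(variance) / 2.0          # Laplacian scale: 2 * b**2 = var / 2
    return (-np.log(2.0 * b) - np.abs(x.real) / b
            - np.log(2.0 * b) - np.abs(x.imag) / b)

def speech_absence_prob(noisy_bin, noise_var, speech_var, q=0.5):
    """Bayes combination of the Laplacian likelihood ratio with prior q of
    speech absence; returns P(H0 | noisy_bin) per time-frequency bin."""
    log_lr = (laplacian_loglike(noisy_bin, noise_var + speech_var)
              - laplacian_loglike(noisy_bin, noise_var))
    lr = np.exp(np.clip(log_lr, -30.0, 30.0))
    return q / (q + (1.0 - q) * lr)
```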