• Title/Summary/Keyword: speech enhancement

Adaptive Threshold for Speech Enhancement in Nonstationary Noisy Environments (비정상 잡음환경에서 음질향상을 위한 적응 임계 치 알고리즘)

  • Lee, Soo-Jeong;Kim, Sun-Hyob
    • The Journal of the Acoustical Society of Korea
    • /
    • v.27 no.7
    • /
    • pp.386-393
    • /
    • 2008
  • This paper proposes a new approach to speech enhancement in highly nonstationary noisy environments. Spectral subtraction (SS) is a well-known technique for speech enhancement in stationary noisy environments; in the real world, however, noise is mostly nonstationary. The proposed method uses an automatically controlled parameter for an adaptive threshold so that it works well in highly nonstationary noise. In particular, the control parameter is adjusted by a linear function of the a posteriori signal-to-noise ratio (SNR) as the noise level increases or decreases. The proposed algorithm is combined with spectral subtraction using a hangover scheme (HO) for speech enhancement. Its performance is evaluated with the ITU-T P.835 signal distortion (SIG) measure and the segmental SNR in various highly nonstationary noisy environments, and it is superior to conventional spectral subtraction with a hangover (HO) and SS with minimum statistics (MS).
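
To make the mechanism concrete, the following is a minimal sketch of magnitude spectral subtraction in which the over-subtraction factor is driven by the a posteriori SNR of each frame, so that subtraction becomes more aggressive as the local SNR drops. All function names, parameter values, and the linear mapping from SNR to the factor are illustrative assumptions, not the paper's exact adaptive-threshold rule or hangover scheme.

```python
# Illustrative sketch only: SNR-driven over-subtraction, not the paper's
# exact auto-control parameter.
import numpy as np

def spectral_subtraction(noisy, noise_psd, n_fft=512, hop=128,
                         alpha_max=4.0, floor=0.02):
    """Frame-wise magnitude spectral subtraction.

    noisy     : 1-D time-domain signal
    noise_psd : estimated noise power spectrum, shape (n_fft // 2 + 1,)
    """
    window = np.hanning(n_fft)
    out = np.zeros(len(noisy))
    norm = np.zeros(len(noisy))
    for start in range(0, len(noisy) - n_fft, hop):
        frame = noisy[start:start + n_fft] * window
        spec = np.fft.rfft(frame)
        power = np.abs(spec) ** 2

        # A posteriori SNR per bin; the over-subtraction factor shrinks
        # linearly as the SNR rises (clipped to [1, alpha_max]).
        post_snr_db = 10.0 * np.log10(power / (noise_psd + 1e-12) + 1e-12)
        alpha = np.clip(alpha_max - post_snr_db / 5.0, 1.0, alpha_max)

        clean_power = np.maximum(power - alpha * noise_psd, floor * power)
        enhanced = np.sqrt(clean_power) * np.exp(1j * np.angle(spec))

        out[start:start + n_fft] += np.fft.irfft(enhanced) * window
        norm[start:start + n_fft] += window ** 2
    return out / np.maximum(norm, 1e-8)
```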

A NMF-Based Speech Enhancement Method Using a Prior Time Varying Information and Gain Function (시간 변화에 따른 사전 정보와 이득 함수를 적용한 NMF 기반 음성 향상 기법)

  • Kwon, Kisoo;Jin, Yu Gwang;Bae, Soo Hyun;Kim, Nam Soo
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.38C no.6
    • /
    • pp.503-511
    • /
    • 2013
  • This paper presents a speech enhancement method using non-negative matrix factorization (NMF). In the training phase, basis matrices are obtained from a speech database and a specific noise database. In the enhancement phase, the noisy signal is separated into speech and noise estimates using these basis matrices. To improve performance, the change of the encoding matrix between the training and enhancement phases is modeled with independent Gaussian distributions, and a constraint derived from these Gaussian models is added to the objective function. A smoothing operation is also applied to the encoding matrix by taking its previous values into account. Finally, a Log-Spectral Amplitude-type algorithm is applied as the gain function.
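
As a rough illustration of the two phases described above, the sketch below trains nonnegative bases for speech and noise with multiplicative KL-divergence updates and, at enhancement time, keeps the bases fixed, estimates the activations, and applies a Wiener-style gain. It omits the Gaussian priors on the encoding matrix and the LSA-type gain that the paper actually uses; all names, ranks, and iteration counts are assumptions.

```python
# Illustrative NMF separation sketch (KL divergence, multiplicative updates).
import numpy as np

def train_basis(V, rank, n_iter=200, eps=1e-10):
    """Learn a basis W such that V ~ W @ H for a magnitude spectrogram V (freq x frames)."""
    F, T = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((F, rank)) + eps
    H = rng.random((rank, T)) + eps
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.T @ np.ones_like(V) + eps)
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (np.ones_like(V) @ H.T + eps)
    return W

def enhance(V_noisy, W_speech, W_noise, n_iter=100, eps=1e-10):
    """Estimate activations with fixed bases, then apply a Wiener-style gain."""
    W = np.concatenate([W_speech, W_noise], axis=1)
    rng = np.random.default_rng(1)
    H = rng.random((W.shape[1], V_noisy.shape[1])) + eps
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (V_noisy / WH)) / (W.T @ np.ones_like(V_noisy) + eps)
    ks = W_speech.shape[1]
    speech_part = W_speech @ H[:ks]
    noise_part = W_noise @ H[ks:]
    gain = speech_part / (speech_part + noise_part + eps)
    return gain * V_noisy          # enhanced magnitude spectrogram
```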

A study on skip-connection with time-frequency self-attention for improving speech enhancement based on complex-valued spectrum (복소 스펙트럼 기반 음성 향상의 성능 향상을 위한 time-frequency self-attention 기반 skip-connection 기법 연구)

  • Jaehee Jung;Wooil Kim
    • The Journal of the Acoustical Society of Korea
    • /
    • v.42 no.2
    • /
    • pp.94-101
    • /
    • 2023
  • A deep neural network composed of encoders and decoders, such as U-Net, is used for speech enhancement and connects the encoder to the decoder through skip-connections. The skip-connection helps reconstruct the enhanced spectrum and complement lost information, but the encoder and decoder features joined by the skip-connection are not necessarily compatible with each other. In this paper, for complex-valued spectrum based speech enhancement, a Self-Attention (SA) mechanism is applied to the skip-connection to transform the encoder features so that they are compatible with the decoder features. Self-attention, which computes each output of a sequence-to-sequence task as a weighted average of the input so that attention is placed on relevant subsets of the input, has been shown to eliminate noise effectively when applied to speech enhancement. Three models that use encoder and decoder features to apply SA to the skip-connection are studied. Experiments on the TIMIT database show that the proposed methods improve all evaluation metrics compared with the Deep Complex U-Net (DCUNET) using skip-connections only.
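
The sketch below shows one plausible way to place self-attention on a skip-connection, with the query taken from the decoder feature and the key/value from the encoder feature before the usual U-Net concatenation. It is a single-head, illustrative PyTorch module; the layer sizes and the exact attention layout are assumptions and not the three configurations studied in the paper.

```python
# Illustrative self-attention on a U-Net skip-connection (single head).
import torch
import torch.nn as nn

class SkipAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, kernel_size=1)
        self.k = nn.Conv2d(channels, channels, kernel_size=1)
        self.v = nn.Conv2d(channels, channels, kernel_size=1)
        self.scale = channels ** -0.5

    def forward(self, enc_feat, dec_feat):
        # Query comes from the decoder, key/value from the encoder, so the
        # encoder feature is re-weighted to match what the decoder needs.
        b, c, f, t = enc_feat.shape
        q = self.q(dec_feat).flatten(2).transpose(1, 2)   # (b, f*t, c)
        k = self.k(enc_feat).flatten(2)                   # (b, c, f*t)
        v = self.v(enc_feat).flatten(2).transpose(1, 2)   # (b, f*t, c)
        attn = torch.softmax(q @ k * self.scale, dim=-1)  # (b, f*t, f*t)
        out = (attn @ v).transpose(1, 2).reshape(b, c, f, t)
        return torch.cat([out, dec_feat], dim=1)          # concatenate as in U-Net

# Example: features with 32 channels, 64 frequency bins, 16 frames.
enc = torch.randn(1, 32, 64, 16)
dec = torch.randn(1, 32, 64, 16)
print(SkipAttention(32)(enc, dec).shape)   # torch.Size([1, 64, 64, 16])
```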

Speech Enhancement for Voice commander in Car environment (차량환경에서 음성명령어기 사용을 위한 음성개선방법)

  • 백승권;한민수;남승현;이봉호;함영권
    • Journal of Broadcast Engineering
    • /
    • v.9 no.1
    • /
    • pp.9-16
    • /
    • 2004
  • In this paper, we present a speech enhancement method used as a pre-processor for a voice commander in a car environment. For friendly and safe use of a voice commander in a running car, non-stationary audio signals such as music and non-candidate speech should be reduced. Our technique is two-microphone based and consists of two parts: Blind Source Separation (BSS) and Kalman filtering. First, BSS operates as a spatial filter to deal with non-stationary signals, and then car noise is reduced by Kalman filtering as a temporal filter. The algorithm's performance is tested on speech recognition, and the results show that our two-microphone technique is a good candidate as a front-end for a voice commander.
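
A minimal two-stage sketch of the idea follows, with FastICA standing in for the spatial BSS stage and a first-order (random-walk) scalar Kalman filter standing in for the temporal stage. Both stand-ins, and all parameter values, are assumptions made for illustration rather than the paper's actual algorithms.

```python
# Illustrative two-microphone pipeline: BSS stand-in + scalar Kalman filter.
import numpy as np
from sklearn.decomposition import FastICA

def separate_two_mics(mic1, mic2):
    """Blindly separate two microphone signals into two source estimates."""
    X = np.stack([mic1, mic2], axis=1)          # (samples, 2)
    ica = FastICA(n_components=2, random_state=0)
    S = ica.fit_transform(X)                    # (samples, 2) source estimates
    return S[:, 0], S[:, 1]

def kalman_denoise(x, process_var=1e-4, meas_var=1e-2):
    """Scalar Kalman filter with a random-walk state model as a temporal filter."""
    est, p = 0.0, 1.0
    out = np.empty_like(x, dtype=float)
    for i, z in enumerate(x):
        p += process_var                        # predict
        k = p / (p + meas_var)                  # Kalman gain
        est += k * (z - est)                    # update
        p *= (1.0 - k)
        out[i] = est
    return out
```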

Speech Enhancement Using Nonnegative Matrix Factorization with Temporal Continuity (시간 연속성을 갖는 비음수 행렬 분해를 이용한 음질 개선)

  • Nam, Seung-Hyon
    • The Journal of the Acoustical Society of Korea
    • /
    • v.34 no.3
    • /
    • pp.240-246
    • /
    • 2015
  • In this paper, speech enhancement using nonnegative matrix factorization (NMF) with temporal continuity is addressed. Speech and noise signals are modeled as Poisson distributions, and the basis vectors and gain vectors of the NMF are modeled as Gamma distributions. Temporal continuity of the gain vectors is known to be critical to the quality of the enhanced speech. In this paper, temporal continuity is implemented by adopting Gamma-Markov chain priors for the noise gain vectors during the separation phase. Simulation results show that the Gamma-Markov chain models the temporal continuity of noise signals and tracks changes in the noise effectively.
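
As a very rough stand-in for the effect of the Gamma-Markov chain prior, the sketch below pulls the noise gain vector of each frame toward that of the previous frame, which would be interleaved with the activation updates of the separation phase. The exponential-smoothing form and the coupling weight are illustrative assumptions; the paper uses a proper probabilistic prior rather than this post-hoc smoothing.

```python
# Crude illustration of temporal continuity on NMF noise activations.
import numpy as np

def smooth_noise_gains(H_noise, lam=0.8):
    """Pull frame t's noise gains toward frame t-1's; H_noise has shape (rank, frames)."""
    H = H_noise.copy()
    for t in range(1, H.shape[1]):
        H[:, t] = lam * H[:, t - 1] + (1.0 - lam) * H[:, t]
    return H
```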

An Adaptive Speech Enhancement System Using Lateral Inhibition and Time-Delay Neural Network (상호억제와 시간지연 신경회로망을 사용한 적응적인 음성강조시스템)

  • Choi, Jae-Seung
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.45 no.2
    • /
    • pp.95-102
    • /
    • 2008
  • This paper proposes an adaptive speech enhancement system based on the auditory system to enhance speech degraded by various background noises. The proposed system detects voiced and unvoiced sections, adaptively adjusts the coefficients for both the lateral inhibition and the amplitude component according to the detected section of each input frame, and then reduces the noise signal using a time-delay neural network. Experiments based on the signal-to-noise ratio confirm that the proposed system is effective for speech degraded by various noises.
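
The noise-reduction stage can be pictured as a time-delay neural network, i.e. framewise 1-D convolutions over a short temporal context, as in the hedged PyTorch sketch below. The layer widths, context size, and feature dimension are assumptions; the voiced/unvoiced detection and lateral-inhibition coefficient control described above are not shown.

```python
# Illustrative time-delay (contextual 1-D convolution) denoiser.
import torch
import torch.nn as nn

class TDNNDenoiser(nn.Module):
    def __init__(self, feat_dim=129, hidden=256, context=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(feat_dim, hidden, kernel_size=context, padding=context // 2),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=context, padding=context // 2),
            nn.ReLU(),
            nn.Conv1d(hidden, feat_dim, kernel_size=1),
        )

    def forward(self, spec):                    # spec: (batch, feat_dim, frames)
        return self.net(spec)

print(TDNNDenoiser()(torch.randn(1, 129, 100)).shape)  # torch.Size([1, 129, 100])
```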

Efficient Speech Enhancement based on left-right HMM with State Sequence Decision Using LRT (좌-우향 은닉 마코프 모델에서 상태결정을 이용한 음질향상)

  • 이기용
    • The Journal of the Acoustical Society of Korea
    • /
    • v.23 no.1
    • /
    • pp.47-53
    • /
    • 2004
  • We propose a new speech enhancement algorithm based on a left-right Hidden Markov Model (HMM) with state decision using a Log-likelihood Ratio Test (LRT). Since conventional HMM-based speech enhancement methods try to improve speech quality over all states, they introduce a huge computational load inappropriate for real-time implementation. In a left-right HMM, only the current state and the next state are considered for a possible state transition, which reduces the computational complexity. In this paper, we propose a method that decides the current state by applying the LRT to the previous state. Experimental results show that the proposed method improves the speed by up to 60 % with 0.2 dB to 0.4 dB degradation of speech quality compared to the conventional method.
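
A minimal sketch of the left-right state decision follows: at each frame only "stay in the current state" and "advance to the next state" are compared with a log-likelihood ratio test. The diagonal-Gaussian state models and the zero threshold are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative LRT state decision for a left-right HMM.
import numpy as np

def log_gauss(x, mean, var):
    """Log-likelihood of x under a diagonal Gaussian."""
    return -0.5 * np.sum(np.log(2.0 * np.pi * var) + (x - mean) ** 2 / var)

def next_state(frame, state, means, variances, threshold=0.0):
    """Return the state index for this frame: stay, or advance by one."""
    if state + 1 >= len(means):                 # already in the last state
        return state
    llr = log_gauss(frame, means[state + 1], variances[state + 1]) \
        - log_gauss(frame, means[state], variances[state])
    return state + 1 if llr > threshold else state
```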

Performance comparison evaluation of real and complex networks for deep neural network-based speech enhancement in the frequency domain (주파수 영역 심층 신경망 기반 음성 향상을 위한 실수 네트워크와 복소 네트워크 성능 비교 평가)

  • Hwang, Seo-Rim;Park, Sung Wook;Park, Youngcheol
    • The Journal of the Acoustical Society of Korea
    • /
    • v.41 no.1
    • /
    • pp.30-37
    • /
    • 2022
  • This paper compares and evaluates model performance from two perspectives, the learning target and the network structure, for training Deep Neural Network (DNN)-based speech enhancement models in the frequency domain. Spectrum mapping and Time-Frequency (T-F) masking were used as learning targets, and a real-valued network and a complex-valued network were used as the network structures. The performance of the speech enhancement models was evaluated with two objective metrics, Perceptual Evaluation of Speech Quality (PESQ) and Short-Time Objective Intelligibility (STOI), depending on the scale of the dataset. The results show that the appropriate amount of training data differs with the type of network and the type of dataset. They also show that, in some cases, a real-valued network can be the more practical choice when the total number of parameters is considered, because it achieves relatively higher performance than the complex-valued network depending on the data size and the learning target.
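
For reference, the sketch below shows how such learning targets are typically turned back into an enhanced STFT: a real-valued magnitude mask or mapped magnitude reuses the noisy phase, whereas a complex mask multiplies the complex STFT directly. The specific mask forms are common choices assumed here for illustration, not necessarily the ones used in the paper.

```python
# Illustrative reconstruction step for mapping vs. masking targets.
import numpy as np

def apply_magnitude_mask(noisy_stft, mask):
    """Real network / masking target: scale magnitudes, keep the noisy phase."""
    return mask * np.abs(noisy_stft) * np.exp(1j * np.angle(noisy_stft))

def apply_complex_mask(noisy_stft, mask_real, mask_imag):
    """Complex network / masking target: complex multiplication of the STFT."""
    return (mask_real + 1j * mask_imag) * noisy_stft

def apply_mapping(predicted_magnitude, noisy_stft):
    """Mapping target: replace the magnitude with the network's prediction."""
    return predicted_magnitude * np.exp(1j * np.angle(noisy_stft))
```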

Noise Suppression Using Normalized Time-Frequency Bin Average and Modified Gain Function for Speech Enhancement in Nonstationary Noisy Environments

  • Lee, Soo-Jeong;Kim, Soon-Hyob
    • The Journal of the Acoustical Society of Korea
    • /
    • v.27 no.1E
    • /
    • pp.1-10
    • /
    • 2008
  • A noise suppression algorithm is proposed for nonstationary noisy environments. The proposed algorithm is different from the conventional approaches such as the spectral subtraction algorithm and the minimum statistics noise estimation algorithm in that it classifies speech and noise signals in time-frequency bins. It calculates the ratio of the variance of the noisy power spectrum in time-frequency bins to its normalized time-frequency average. If the ratio is greater than an adaptive threshold, speech is considered to be present. Our adaptive algorithm tracks the threshold and controls the trade-off between residual noise and distortion. The estimated clean speech power spectrum is obtained by a modified gain function and the updated noisy power spectrum of the time-frequency bin. This new algorithm has the advantages of simplicity and light computational load for estimating the noise. This algorithm reduces the residual noise significantly, and is superior to the conventional methods.
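
A condensed sketch of the per-bin decision loop follows: a normalized measure of the noisy power in each time-frequency bin is compared with an adaptive threshold, the noise estimate is updated only in bins judged noise-dominated, and a simple gain is applied. The smoothing constants and the gain form are illustrative assumptions, not the paper's modified gain function.

```python
# Illustrative per-bin speech/noise decision with an adaptive threshold.
import numpy as np

def enhance_frame(power, noise_est, threshold, alpha=0.9, beta=0.98, g_min=0.05):
    """power, noise_est, threshold: arrays of shape (n_bins,) for one frame."""
    ratio = power / (noise_est + 1e-12)          # normalized local measure
    speech_present = ratio > threshold

    # Update the noise estimate only in noise-dominated bins; let the
    # threshold drift slowly toward the observed ratio.
    noise_est = np.where(speech_present, noise_est,
                         beta * noise_est + (1.0 - beta) * power)
    threshold = alpha * threshold + (1.0 - alpha) * ratio

    gain = np.maximum(1.0 - noise_est / (power + 1e-12), g_min)
    return gain * power, noise_est, threshold
```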

Denoising of Speech Signal Using Wavelet Transform (웨이브렛 변환을 이용한 음성신호의 잡음제거)

  • 한미경;배건성
    • The Journal of the Acoustical Society of Korea
    • /
    • v.19 no.5
    • /
    • pp.27-34
    • /
    • 2000
  • This paper deals with speech enhancement methods using the wavelet transform. A cycle-spinning scheme and the undecimated wavelet transform are used for denoising speech signals, and their results are compared with that of the conventional wavelet transform. A soft-thresholding technique is applied to remove additive background noise from the noisy speech. The 8-tap Symlet wavelet and the pyramid algorithm are used for the wavelet transform. Performance is assessed by average SNR, cepstral distance, and an informal subjective listening test. Experimental results demonstrate that both the cycle-spinning denoising (CSD) method and the undecimated wavelet denoising (UWD) method outperform the conventional wavelet denoising (CWD) method in the objective performance measures as well as in the subjective listening test. The two methods also produce fewer of the "clicks" that usually appear in the neighborhood of signal discontinuities.
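
A minimal soft-thresholding sketch with PyWavelets is given below, assuming 'sym4' (an 8-tap Symlet filter) and the common universal threshold with a MAD noise estimate. Cycle-spinning and the undecimated transform are not shown; the threshold rule and decomposition depth are assumptions, not necessarily those used in the paper.

```python
# Illustrative wavelet soft-thresholding denoiser.
import numpy as np
import pywt

def wavelet_denoise(x, wavelet="sym4", level=5):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    # Noise level estimated from the finest detail coefficients (MAD estimate).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(x)))           # universal threshold
    denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                              for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(x)]
```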
