• Title/Summary/Keyword: speech enhancement

340 search results

Speech Enhancement Algorithm Based on Teager Energy and Speech Absence Probability in Noisy Environments (잡음환경에서 Teager 에너지와 음성부재확률 기반의 음성향상 알고리즘)

  • Park, Yun-Sik; An, Hong-Sub; Lee, Sang-Min
    • Journal of the Institute of Electronics Engineers of Korea SP, v.49 no.3, pp.81-88, 2012
  • In this paper, we propose a novel speech enhancement algorithm for effective noise suppression in various noisy environments. To improve the discrimination of speech and noise segments, the proposed method uses a local speech absence probability (LSAP) based on the Teager energy (TE) of the noisy speech, rather than the conventional LSAP, as the feature parameter for voice activity detection (VAD) in each frequency subband. In addition, the global SAP (GSAP) derived in each frame is used as a weighting parameter to modify the adopted TE operator and further improve its performance. The proposed algorithm is evaluated with objective tests in various environments and yields better results than conventional methods.
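For orientation, the discrete Teager energy operator this abstract relies on is commonly written as Ψ[x(n)] = x(n)² − x(n−1)x(n+1). The sketch below applies it to a frame's magnitude spectrum and averages over a few subbands; the framing and subband averaging are illustrative assumptions, not the authors' exact LSAP formulation.

```python
import numpy as np

def teager_energy(x):
    """Discrete Teager energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    x = np.asarray(x, dtype=float)
    psi = np.empty_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    psi[0], psi[-1] = psi[1], psi[-2]   # replicate the edge values
    return psi

def subband_te_feature(frame, n_fft=256, n_subbands=8):
    """Average Teager energy of the frame spectrum in a few subbands,
    an illustrative stand-in for the TE-based LSAP feature."""
    spec = np.abs(np.fft.rfft(frame, n_fft))
    te = teager_energy(spec)
    bands = np.array_split(te, n_subbands)
    return np.array([b.mean() for b in bands])
```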

Speech Enhancement System Using a Model of Auditory Mechanism (청각기강의 모델을 이용한 음성강조 시스템)

  • 최재승
    • Journal of the Institute of Electronics Engineers of Korea SP, v.41 no.6, pp.295-302, 2004
  • In the field of speech processing, the treatment of noise remains an important problem, and it is well known that background noise causes a marked reduction in speech recognition rates. Examples of such background noise include the various non-stationary noises found in real environments, such as the driving noise of automobiles on the road or the typing noise of a printer. These kinds of noise cannot simply be eliminated by a conventional Wiener filter and require more sophisticated techniques. In this paper, we present a speech enhancement algorithm that uses a model of mutual inhibition to reduce noise in speech contaminated by white noise or the background noises mentioned above. Spectral distortion measurements confirm that the proposed algorithm is effective for speech degraded not only by white noise but also by colored noise.
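The abstract does not spell out the inhibition model, so the following is only a generic lateral-inhibition sketch over spectral channels, in which each channel's magnitude is reduced by a weighted sum of its neighbours; the kernel shape and strength are invented for illustration and are not the paper's model.

```python
import numpy as np

def lateral_inhibition(mag_spectrum, strength=0.3, width=2):
    """Generic mutual/lateral inhibition across frequency channels:
    each channel is suppressed by a weighted sum of its neighbours.
    'strength' and 'width' are illustrative placeholders."""
    mag = np.asarray(mag_spectrum, dtype=float)
    kernel = np.ones(2 * width + 1)
    kernel[width] = 0.0                       # exclude the channel itself
    kernel /= kernel.sum()
    neighbours = np.convolve(mag, kernel, mode="same")
    return np.maximum(mag - strength * neighbours, 0.0)
```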

Design of Speech Enhancement U-Net for Embedded Computing (임베디드 연산을 위한 잡음에서 음성추출 U-Net 설계)

  • Kim, Hyun-Don
    • IEMEK Journal of Embedded Systems and Applications, v.15 no.5, pp.227-234, 2020
  • In this paper, we propose wav-U-Net to improve speech enhancement in heavily noisy environments; it incorporates three principal techniques. First, as input data we use 128 modified Mel-scale filter banks, which reduce the computational burden compared with 512 frequency bins. The Mel scale mimics the non-linear frequency resolution of human hearing, being more discriminative at lower frequencies and less discriminative at higher frequencies, so it is a suitable feature in terms of both performance and computing power for a network that focuses on speech signals. Second, we add a simple ResNet as pre-processing, which helps the proposed network produce clear speech estimates and suppress high-frequency noise. Finally, the proposed U-Net model shows significant performance regardless of the kind of noise. In particular, despite using a single channel, it handles non-stationary noises whose frequency properties change dynamically, and it can estimate speech from noisy signals even in extremely noisy environments where the noise is much louder than the speech (below 0 dB SNR). The performance of the proposed wav-U-Net improved by about 200% in SDR and 460% in NSDR compared with Jansson's conventional wav-U-Net. The processing time of our wav-U-Net with 128 modified Mel-scale filter banks was also about 2.7 times faster than that of the common wav-U-Net with 512 frequency bins as input.
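A standard 128-band triangular Mel filter bank of the kind the abstract contrasts with 512 raw frequency bins can be built as below. This is the textbook construction, not the paper's "modified" filter bank; the 16 kHz sample rate and 1024-point FFT (513 one-sided bins, close to the 512 mentioned) are assumptions.

```python
import numpy as np

def hz_to_mel(f):  return 2595.0 * np.log10(1.0 + f / 700.0)
def mel_to_hz(m):  return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels=128, n_fft=1024, sr=16000):
    """Triangular Mel filter bank mapping (n_fft//2 + 1) linear bins to
    n_mels bands. With this many bands, some low-frequency filters can
    collapse, which a 'modified' bank would presumably address."""
    n_bins = n_fft // 2 + 1
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bin_pts = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_bins))
    for m in range(1, n_mels + 1):
        lo, c, hi = bin_pts[m - 1], bin_pts[m], bin_pts[m + 1]
        if c > lo: fb[m - 1, lo:c] = (np.arange(lo, c) - lo) / (c - lo)
        if hi > c: fb[m - 1, c:hi] = (hi - np.arange(c, hi)) / (hi - c)
    return fb

# Usage sketch: compress a 513-bin magnitude spectrum to 128 Mel bands.
# mel_spec = mel_filterbank() @ np.abs(np.fft.rfft(frame, 1024))
```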

Speech Enhancement Based on Voice/Unvoice Classification (유성음/무성음 분리를 이용한 잡음처리)

  • 유창동
    • The Journal of the Acoustical Society of Korea, v.21 no.4, pp.374-379, 2002
  • In this paper, a novel method to reduce noise using voiced/unvoiced classification is proposed. Voicing is an important feature of speech, and the proposed method processes the voiced and unvoiced parts of noisy speech differently. Speech is classified as voiced or unvoiced using the zero-crossing rate and energy, and a modified speech/noise-dominance decision based on this classification is proposed. The proposed method was tested under white noise and airplane noise conditions; based on segmental SNR comparisons with an existing method and on listening to the enhanced speech, the performance of the proposed method was superior to that of the existing method.
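A frame-level voiced/unvoiced decision from zero-crossing rate and short-time energy, as described here, can be sketched as follows; the thresholds are arbitrary placeholders rather than the paper's values.

```python
import numpy as np

def classify_voicing(frame, energy_thresh=0.01, zcr_thresh=0.15):
    """Toy voiced/unvoiced/silence decision from short-time energy and
    zero-crossing rate. Thresholds are illustrative placeholders."""
    frame = np.asarray(frame, dtype=float)
    energy = np.mean(frame ** 2)
    # Fraction of adjacent-sample sign changes in the frame.
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0
    if energy < energy_thresh:
        return "silence"
    return "voiced" if zcr < zcr_thresh else "unvoiced"
```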

Performance Evaluation of Environmental Noise Reduction Techniques for Hearing Aids (보청기를 위한 배경 잡음 제거 기법의 성능 평가)

  • Park, S.J.; Doh, W.; Shin, S.W.; Youn, D.H.; Kim, D.W.; Park, Y.C.
    • Proceedings of the KOSOMBE Conference, v.1997 no.11, pp.83-86, 1997
  • To provide an improved aided listening environment for hearing-impaired listeners, background noise reduction techniques are investigated as a front-end to conventional hearing aids, and their effects are tested subjectively. Several speech enhancement schemes were implemented, and preference tests with normal-hearing listeners were performed to select the most suitable scheme for hearing-impaired listeners. The results indicate that SDT scores without a speech enhancement scheme drop more sharply as SNR decreases than those with speech enhancement. SDT scores obtained from hearing-impaired listeners wearing hearing aids showed large variability; however, all impaired listeners preferred the noise-suppressed sounds to the unsuppressed ones.


Histogram Enhancement for Robust Speaker Verification (강인한 화자 확인을 위한 히스토그램 개선 기법)

  • Choi, Jae-Kil; Kwon, Chul-Hong
    • MALSORI, no.63, pp.153-170, 2007
  • It is well known that an acoustic mismatch between the speech obtained during training and testing drastically degrades the accuracy of speaker verification systems. This paper presents an MFCC histogram enhancement technique to improve the robustness of a speaker verification system. The technique transforms the features extracted from speech within an utterance so that their statistics conform to a reference distribution; the reference distributions proposed in this paper are the uniform distribution and the beta distribution. The transformation modifies the contrast of the MFCC histogram so that the performance of the speaker verification system is improved both when training and testing are clean and when training is clean but testing is noisy.
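Mapping each feature dimension's empirical distribution onto a uniform reference, as the abstract describes, amounts to rank-based histogram equalization; a compact sketch is shown below. The beta-distribution variant would additionally pass the result through a beta quantile function (e.g. scipy.stats.beta.ppf); the interface here is an assumption, not the authors' implementation.

```python
import numpy as np

def equalize_to_uniform(features):
    """Histogram equalization of each feature dimension (columns) of a
    (frames x dims) MFCC matrix onto a uniform [0, 1] reference."""
    feats = np.asarray(features, dtype=float)
    out = np.empty_like(feats)
    n = feats.shape[0]
    for d in range(feats.shape[1]):
        ranks = np.argsort(np.argsort(feats[:, d]))   # rank of each frame, 0..n-1
        out[:, d] = (ranks + 0.5) / n                 # empirical CDF value
        # For a beta reference: scipy.stats.beta.ppf(out[:, d], a, b)
    return out
```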


A Study on the Design of Integrated Speech Enhancement System for Hands-Free Mobile Radiotelephony in a Car

  • Park, Kyu-Sik; Oh, Sang-Hun
    • The Journal of the Acoustical Society of Korea, v.18 no.2E, pp.45-52, 1999
  • This paper presents an integrated speech enhancement system for hands-free mobile communication. The proposed system incorporates both acoustic echo cancellation and engine noise reduction to enhance the desired speech signal in echoed and noisy environments. A delayless subband adaptive structure is used for the acoustic echo cancellation, and an NLMS-based adaptive noise canceller is then applied to the residual-echo-removed noisy signal to selectively attenuate the engine noise in its dominant frequency components. Two sets of computer simulations demonstrate the effectiveness of the system: one for a fixed acoustic environment, and the other, a more realistic situation, for the robustness of the system when the acoustic transmission environment changes. The simulation results confirm 20-25 dB ERLE in acoustic echo cancellation and 9-19 dB engine noise attenuation in the dominant frequency components for both cases.
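The NLMS-based adaptive noise canceller mentioned here follows the standard normalized LMS update; a reference-input sketch is given below, with the filter length and step size chosen arbitrarily rather than taken from the paper.

```python
import numpy as np

def nlms_cancel(primary, reference, n_taps=64, mu=0.5, eps=1e-8):
    """Standard NLMS adaptive noise canceller: 'reference' is a noise
    reference (e.g. picked up near the engine), 'primary' is speech + noise.
    Returns the error signal, i.e. the enhanced speech estimate."""
    primary = np.asarray(primary, dtype=float)
    reference = np.asarray(reference, dtype=float)
    w = np.zeros(n_taps)
    out = np.zeros(len(primary))
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]          # most recent sample first
        y = np.dot(w, x)                           # noise estimate
        e = primary[n] - y                         # enhanced sample
        w += (mu / (eps + np.dot(x, x))) * e * x   # normalized LMS update
        out[n] = e
    return out
```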


Speaker Identification Using an Ensemble of Feature Enhancement Methods (특징 강화 방법의 앙상블을 이용한 화자 식별)

  • Yang, IL-Ho; Kim, Min-Seok; So, Byung-Min; Kim, Myung-Jae; Yu, Ha-Jin
    • Phonetics and Speech Sciences, v.3 no.2, pp.71-78, 2011
  • In this paper, we propose an approach that constructs classifier ensembles from various channel compensation and feature enhancement methods. CMN and CMVN are used for channel compensation; PCA, kernel PCA, greedy kernel PCA, and kernel multimodal discriminant analysis are used for feature enhancement. The proposed ensemble system is constructed from the fifteen classifiers obtained by combining three channel compensation settings (including 'without compensation') with five feature enhancement settings (including 'without enhancement'). Experimental results show that the proposed ensemble system gives the highest average speaker identification rate across various environments (channels, noises, and sessions).
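Of the channel compensation steps listed, CMN and CMVN are simple per-utterance normalizations of the cepstral features; a minimal sketch, assuming a (frames x dims) feature matrix, follows.

```python
import numpy as np

def cmn(feats):
    """Cepstral mean normalization over one utterance (frames x dims)."""
    feats = np.asarray(feats, dtype=float)
    return feats - feats.mean(axis=0, keepdims=True)

def cmvn(feats, eps=1e-8):
    """Cepstral mean and variance normalization over one utterance."""
    feats = np.asarray(feats, dtype=float)
    mu = feats.mean(axis=0, keepdims=True)
    sigma = feats.std(axis=0, keepdims=True)
    return (feats - mu) / (sigma + eps)
```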


A Novel Speech Enhancement Based on Speech/Noise-dominant Decision in Time-frequency Domain (시간-주파수 영역에서 음성/잡음 우세 결정에 의한 새로운 잡음처리)

  • 윤석현; 유창동
    • The Journal of the Acoustical Society of Korea, v.20 no.3, pp.48-55, 2001
  • A novel method to reduce additive non-stationary noise is proposed. The method requires neither information about the noise nor an estimate of the noise statistics from pause regions. Enhancement is performed band by band for each time frame: based on a decision on whether a particular band in a frame is speech- or noise-dominant, together with the masking property of the human auditory system, an appropriate amount of noise is removed using spectral subtraction. The proposed method was tested under various noise conditions (car noise, F16 noise, white Gaussian noise, pink noise, tank noise, and babble noise); based on segmental SNR comparisons with the spectral subtraction method, visual inspection of the enhanced spectrograms, and listening to the enhanced speech, the method effectively reduced the various noises while minimizing distortion of the speech.
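A generic band-wise spectral subtraction step of the kind this abstract builds on is sketched below; the dominance test, over-subtraction factor, and spectral floor are simplified placeholders for the paper's masking-based rule.

```python
import numpy as np

def subtract_band(noisy_mag, noise_mag, alpha=2.0, beta=0.01):
    """Spectral subtraction in one band: over-subtract when the band looks
    noise-dominant, subtract plainly when it looks speech-dominant.
    The dominance rule and factors are illustrative simplifications."""
    noisy_mag = np.asarray(noisy_mag, dtype=float)
    noise_mag = np.asarray(noise_mag, dtype=float)
    speech_dominant = noisy_mag > 2.0 * noise_mag          # crude decision
    factor = np.where(speech_dominant, 1.0, alpha)
    clean = noisy_mag - factor * noise_mag
    return np.maximum(clean, beta * noisy_mag)             # spectral floor
```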


Speech Basis Matrix Using Noise Data and NMF-Based Speech Enhancement Scheme (잡음 데이터를 활용한 음성 기저 행렬과 NMF 기반 음성 향상 기법)

  • Kwon, Kisoo; Kim, Hyung Young; Kim, Nam Soo
    • The Journal of Korean Institute of Communications and Information Sciences, v.40 no.4, pp.619-627, 2015
  • This paper presents a speech enhancement method using non-negative matrix factorization (NMF). In the training phase, a basis matrix for each source signal is obtained from an appropriate database, and these basis matrices are then used for source separation; the performance of the enhancement therefore depends heavily on the basis matrices. In the proposed method, the speech basis matrix is trained so that it yields a high reconstruction error for noise signals, which gives better performance than standard NMF, in which each basis matrix is trained independently. For comparison, we propose another method and also evaluate a previous method. In the experiments, performance is measured by perceptual evaluation of speech quality (PESQ) and signal-to-distortion ratio (SDR), and the proposed method outperforms the other methods.
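The separation stage of NMF-based enhancement can be sketched as below: the pre-trained speech and noise bases are concatenated, activations are fit to the noisy magnitude spectrogram with multiplicative updates, and a Wiener-style mask reconstructs the speech part. This is the standard recipe, not the paper's proposed discriminative training of the speech basis.

```python
import numpy as np

def nmf_enhance(noisy_mag, W_speech, W_noise, n_iter=100, eps=1e-10):
    """Separate a noisy magnitude spectrogram V (freq x frames) using fixed,
    pre-trained speech/noise bases with multiplicative NMF updates
    (Euclidean cost), then apply a Wiener-style mask."""
    V = np.maximum(np.asarray(noisy_mag, dtype=float), eps)
    W = np.concatenate([W_speech, W_noise], axis=1)     # freq x (Ks + Kn)
    H = np.abs(np.random.rand(W.shape[1], V.shape[1]))  # activations
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)            # multiplicative update
    Ks = W_speech.shape[1]
    speech_part = W_speech @ H[:Ks]
    noise_part = W_noise @ H[Ks:]
    mask = speech_part / (speech_part + noise_part + eps)
    return mask * noisy_mag                              # enhanced magnitude
```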