• Title/Summary/Keyword: Speech Separation

Speech Enhancement Using Receding Horizon FIR Filtering

  • Kim, Pyung-Soo;Kwon, Wook-Hyu;Kwon, Oh-Kyu
    • Transactions on Control, Automation and Systems Engineering / v.2 no.1 / pp.7-12 / 2000
  • A new speech enhancement algorithm for speech corrupted by slowly varying additive colored noise is suggested, based on a state-space signal model. Owing to its FIR structure and the unimportance of long-term past information, the receding horizon (RH) FIR filter, known to be a best linear unbiased estimation (BLUE) filter, is utilized to obtain the noise-suppressed speech signal. As a special case of the colored-noise problem, the suggested approach is generalized to perform blind separation of two speech signals from a single mixture. It is shown that the exact speech signal is obtained when the incoming speech signal is noise-free. A toy sketch of a receding-horizon estimate follows this entry.

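The receding-horizon idea is simple to demonstrate on a toy problem. Below is a minimal sketch, not the paper's algorithm: a least-squares (hence BLUE under white measurement noise) FIR estimate of the current state of a scalar state-space model, computed from only the N most recent measurements; the model constants `a`, `sigma`, and the horizon `N` are illustrative assumptions.

```python
import numpy as np

# Toy scalar state-space model (illustrative values, not from the paper):
#   x[k+1] = a * x[k],   y[k] = x[k] + v[k],   v ~ N(0, sigma^2)
a, sigma, N = 0.95, 0.3, 20              # horizon: N most recent samples

rng = np.random.default_rng(0)
x = np.empty(200)
x[0] = 1.0
for k in range(x.size - 1):
    x[k + 1] = a * x[k]
y = x + sigma * rng.standard_normal(x.size)

# FIR estimate of x[k] from the window y[k-N+1..k] only:
#   y[k-i] = a**(-i) * x[k] + v[k-i], so stacking gives H * x[k] ~= y_window;
# the least-squares solution is the BLUE when v is white.
H = a ** -np.arange(N - 1, -1.0, -1.0)   # regressors, oldest..newest sample
x_hat = np.array([np.dot(H, y[k - N + 1:k + 1]) / np.dot(H, H)
                  for k in range(N - 1, y.size)])

print("RMSE, raw measurements:", np.sqrt(np.mean((y[N - 1:] - x[N - 1:]) ** 2)))
print("RMSE, RH FIR estimate: ", np.sqrt(np.mean((x_hat - x[N - 1:]) ** 2)))
```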

A Study on Speech Separation using Sinusoidal Model and Psychoacoustics Model (정현파 모델과 사이코어쿠스틱스 모델을 이용한 음성 분리에 관한 연구)

  • Hwang, Sun-Il;Han, Doo-Jin;Kwon, Chul-Hyun;Shin, Dae-Kyu;Park, Sang-Hui
    • Proceedings of the KIEE Conference / 2001.07d / pp.2622-2624 / 2001
  • In this paper, speaker separation is employed when speech from two talkers has been summed into one signal and it is desirable to recover one or both of the speech signals from the composite signal. A method is proposed that separates the summed speech signals and verifies the similarity between the original and separated signals by their cross-correlation. A frequency-sampling method based on the sinusoidal model is used to separate a composite of two vocalic speech signals, and a noise-masking method based on the psychoacoustic model is used to separate a composite of vocalic and non-vocalic speech. A peak-picking sketch of sinusoidal modeling follows this entry.

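As a rough illustration of the sinusoidal-model half of the approach (not the paper's exact frequency-sampling method), the sketch below picks the strongest peaks of a frame's magnitude spectrum and resynthesizes the frame as a sum of sinusoids; the frame length, sample rate, and test tones are arbitrary choices.

```python
import numpy as np

def sinusoidal_frame(frame, fs, n_peaks=10):
    """Approximate one frame as a sum of its strongest spectral peaks."""
    n = frame.size
    win = np.hanning(n)
    spec = np.fft.rfft(frame * win)
    mag, phase = np.abs(spec), np.angle(spec)

    # Local maxima of the magnitude spectrum are sinusoid candidates.
    peaks = [k for k in range(1, mag.size - 1)
             if mag[k - 1] < mag[k] > mag[k + 1]]
    peaks = sorted(peaks, key=lambda k: mag[k], reverse=True)[:n_peaks]

    # Resynthesis; the 2/sum(win) factor undoes the window/FFT gain.
    t = np.arange(n) / fs
    out = np.zeros(n)
    for k in peaks:
        amp = 2.0 * mag[k] / win.sum()
        out += amp * np.cos(2 * np.pi * (k * fs / n) * t + phase[k])
    return out

fs, n = 8000, 256
t = np.arange(n) / fs
# Two test tones placed on exact FFT bins (bins 14 and 28) for clarity.
frame = np.cos(2 * np.pi * 437.5 * t) + 0.5 * np.cos(2 * np.pi * 875.0 * t)
approx = sinusoidal_frame(frame, fs)
err = np.sum((frame - approx) ** 2) / np.sum(frame ** 2)
print(f"relative resynthesis error: {err:.4f}")
```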

A Study on Speech Separation in Cochannel using Sinusoidal Model (Sinusoidal Model을 이용한 Cochannel상에서의 음성분리에 관한 연구)

  • Park, Hyun-Gyu;Shin, Joong-In;Park, Sang-Hee
    • Proceedings of the KIEE Conference / 1997.11a / pp.597-599 / 1997
  • Cochannel speaker separation is employed when speech from two talkers has been summed into one signal and it is desirable to recover one or both of the speech signals from the composite signal. Cochannel speech occurs in many common situations, such as when two AM signals containing speech are transmitted on the same frequency or when two people speak simultaneously (e.g., on the telephone). In this paper, a method for separating speech in such situations is proposed. In particular, only the voiced sound among the few sound states is separated, and the similarity between the original and separated signals is verified by their cross-correlation; a cross-correlation sketch follows this entry.

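The cross-correlation check used for verification can be written down directly. The sketch below computes a normalized cross-correlation between an "original" and a "separated" signal over a range of lags; the delayed, noisy copy standing in for a separator output is purely illustrative.

```python
import numpy as np

def xcorr_similarity(ref, est, max_lag=100):
    """Peak normalized cross-correlation between ref and est over lags."""
    ref = ref - ref.mean()
    est = est - est.mean()
    denom = np.sqrt(np.sum(ref ** 2) * np.sum(est ** 2))
    best_lag, best_rho = 0, -1.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:                     # slide est backwards against ref
            r, e = ref[lag:], est[:est.size - lag]
        else:                            # slide est forwards against ref
            r, e = ref[:ref.size + lag], est[-lag:]
        rho = np.dot(r, e) / denom
        if rho > best_rho:
            best_lag, best_rho = lag, rho
    return best_lag, best_rho

rng = np.random.default_rng(1)
orig = rng.standard_normal(2000)
separated = np.roll(orig, 7) + 0.1 * rng.standard_normal(2000)  # toy output
lag, rho = xcorr_similarity(orig, separated)
print(f"best lag = {lag}, similarity = {rho:.3f}")
```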

Separation of Periodic and Aperiodic Components of Pathological Speech Signal (장애음성의 주기성분과 잡음성분의 분리 방법에 관하여)

  • Jo Cheolwoo;Li Tao
    • Proceedings of the KSPS conference / 2003.10a / pp.25-28 / 2003
  • The aim of this paper is to analyze pathological voice by separating the signal into periodic and aperiodic parts. Separation was performed recursively on the residual signal of the voice signal. Starting from an initial estimate of the aperiodic part of the spectrum, the aperiodic part is determined by an extrapolation method, and the periodic part is obtained by subtracting the aperiodic part from the original spectrum. A harmonics-to-noise ratio (HNR) parameter is derived from the separation, and its statistics are compared with those of jitter and shimmer for normal, benign, and malignant cases. An HNR sketch follows this entry.

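Once periodic and aperiodic parts are separated, HNR is a one-line power ratio. In the toy sketch below the two parts are known by construction (a harmonic series plus white noise); in the paper they would come from the recursive spectral separation instead.

```python
import numpy as np

def hnr_db(periodic, aperiodic):
    """Harmonics-to-noise ratio in dB from separated components."""
    return 10 * np.log10(np.sum(periodic ** 2) / np.sum(aperiodic ** 2))

fs = 16000
t = np.arange(fs // 10) / fs              # 100 ms of signal
f0 = 120.0                                # fundamental, illustrative value
periodic = sum(np.cos(2 * np.pi * f0 * (h + 1) * t) / (h + 1)
               for h in range(5))         # decaying harmonic series
rng = np.random.default_rng(2)
aperiodic = 0.05 * rng.standard_normal(t.size)

print(f"HNR = {hnr_db(periodic, aperiodic):.1f} dB")
```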

A New Formulation of Multichannel Blind Deconvolution: Its Properties and Modifications for Speech Separation

  • Nam, Seung-Hyon;Jee, In-Nho
    • The Journal of the Acoustical Society of Korea / v.25 no.4E / pp.148-153 / 2006
  • A new normalized multichannel blind deconvolution (MBD) algorithm is presented for nonstationary convolutive mixtures, and its properties and modifications are discussed in detail. The proposed algorithm normalizes the signal spectrum in the frequency domain to provide faster, stable convergence and improved separation without the whitening effect. Modifications of the proposed algorithm, such as nonholonomic constraints and off-diagonal learning, are also discussed. Simulation results using a real-world recording confirm the superior performance of the proposed algorithm and its usefulness in real-world applications. A spectrum-normalization sketch follows this entry.
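
The exact normalization is specified in the paper; the sketch below shows one plausible form of the idea, dividing each frequency bin by a recursively smoothed estimate of its power so that strong bins do not dominate adaptation. The smoothing constant `alpha` and the toy STFT data are assumptions for illustration.

```python
import numpy as np

def normalize_spectrum(frames, alpha=0.9, eps=1e-8):
    """Per-bin power normalization of STFT frames, shape (n_frames, n_bins).

    Each bin is divided by the square root of a recursively smoothed
    power estimate, equalizing the spectrum without changing phase.
    """
    power = np.full(frames.shape[1], eps)
    out = np.empty_like(frames)
    for i, frame in enumerate(frames):
        power = alpha * power + (1 - alpha) * np.abs(frame) ** 2
        out[i] = frame / np.sqrt(power + eps)
    return out

# Toy STFT-domain data: one dominant bin among four.
rng = np.random.default_rng(3)
frames = rng.standard_normal((50, 4)) + 1j * rng.standard_normal((50, 4))
frames[:, 0] *= 100.0                      # strong spectral component
normed = normalize_spectrum(frames)
print("raw bin powers:   ", np.round(np.mean(np.abs(frames) ** 2, 0), 1))
print("normed bin powers:", np.round(np.mean(np.abs(normed) ** 2, 0), 2))
```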

Target Speaker Speech Restoration via Spectral Bases Learning (주파수 특성 기저벡터 학습을 통한 특정화자 음성 복원)

  • Park, Sun-Ho;Yoo, Ji-Ho;Choi, Seung-Jin
    • Journal of KIISE:Software and Applications / v.36 no.3 / pp.179-186 / 2009
  • This paper proposes a target speech extraction method which restores the speech signal of a target speaker from a noisy convolutive mixture of speech and an interference source. We assume that the target speaker is known and that his/her utterances are available at training time. Incorporating the additional information extracted from the training utterances into the separation, we combine convolutive blind source separation (CBSS) with non-negative decomposition techniques, e.g., a probabilistic latent variable model. The non-negative decomposition is used to learn a set of bases from the spectrogram of the training utterances, where the bases represent the spectral information corresponding to the target speaker; an NMF-style basis-learning sketch follows this entry. Based on the learned spectral bases, our method provides two post-processing steps for CBSS. The channel selection step finds the desirable output channel from CBSS, i.e., the one that dominantly contains the target speech. The reconstruction step recovers the original spectrogram of the target speech from the selected output channel so that the remaining interference and background noise are suppressed. Experimental results show that our method substantially improves the separation results of CBSS and, as a result, successfully recovers the target speech.
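
A common member of the non-negative decomposition family mentioned above is NMF with multiplicative updates. The sketch below is that simpler stand-in, not the paper's probabilistic latent variable model: it factorizes a magnitude spectrogram V into W H, where the columns of W play the role of the target speaker's spectral bases.

```python
import numpy as np

def nmf(V, n_bases=8, n_iter=200, eps=1e-9, seed=0):
    """Euclidean-distance NMF: V (freq x time) ~= W (freq x bases) @ H."""
    rng = np.random.default_rng(seed)
    F, T = V.shape
    W = rng.random((F, n_bases)) + eps
    H = rng.random((n_bases, T)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # multiplicative updates
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # keep factors non-negative
    return W, H

# Toy "spectrogram": random combinations of two spectral prototypes.
rng = np.random.default_rng(4)
protos = np.abs(rng.standard_normal((64, 2)))          # freq x 2 bases
V = protos @ np.abs(rng.standard_normal((2, 100)))     # freq x time
W, H = nmf(V, n_bases=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(f"relative reconstruction error: {err:.4f}")
```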

Iterative Computation of Periodic and Aperiodic Part from Speech Signal (음성 신호로부터 주기, 비주기 성분의 반복적 계산법에 의한 분리 실험)

  • Jo Cheol-Woo;Lee Tao
    • MALSORI / no.48 / pp.117-126 / 2003
  • The source of a speech signal is actually composed of a combination of periodic and aperiodic components, although it is often modeled as one or the other. This paper presents an experiment that separates the periodic and aperiodic components of the speech source. The linear predictive residual signal was used as an approximation of the vocal source of the original speech to obtain the estimated aperiodic part, and an iterative extrapolation method was used to compute it. A separation sketch follows this entry.

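One way to read the procedure (this sketch is an interpretation, not the authors' code): treat the spectrum between harmonics as samples of the aperiodic part, fill the harmonic bins by interpolation, and subtract to obtain the periodic part. A single interpolation pass stands in for the iterative extrapolation, and the harmonic spacing is assumed known.

```python
import numpy as np

def split_periodic_aperiodic(mag, f0_bin):
    """Estimate the aperiodic magnitude spectrum by interpolating across
    harmonic bins, then get the periodic part by subtraction."""
    harmonic = np.zeros(mag.size, dtype=bool)
    for k in range(f0_bin, mag.size, f0_bin):
        lo, hi = max(k - 1, 0), min(k + 2, mag.size)
        harmonic[lo:hi] = True            # mask a few bins per harmonic

    bins = np.arange(mag.size)
    aperiodic = mag.copy()
    aperiodic[harmonic] = np.interp(      # fill in the noise floor
        bins[harmonic], bins[~harmonic], mag[~harmonic])
    periodic = np.clip(mag - aperiodic, 0.0, None)
    return periodic, aperiodic

# Toy magnitude spectrum: harmonic peaks every 16 bins over a noise floor.
rng = np.random.default_rng(5)
mag = 0.1 + 0.02 * rng.random(256)
mag[16::16] += 1.0
periodic, aperiodic = split_periodic_aperiodic(mag, f0_bin=16)
print("periodic energy:", round(float(np.sum(periodic ** 2)), 3))
print("aperiodic mean: ", round(float(np.mean(aperiodic)), 3))
```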

Multi-channel Speech Enhancement Using Blind Source Separation and Cross-channel Wiener Filtering

  • Jang, Gil-Jin;Choi, Chang-Kyu;Lee, Yong-Beom;Kim, Jeong-Su;Kim, Sang-Ryong
    • The Journal of the Acoustical Society of Korea / v.23 no.2E / pp.56-67 / 2004
  • Despite abundant research outcomes on blind source separation (BSS) in many types of simulated environments, its performance is still not satisfactory for real environments. The major obstacles appear to be the finite filter length of the assumed mixing model and nonlinear sensor noise. This paper presents a two-step speech enhancement method with multiple microphone inputs. The first step performs a frequency-domain BSS algorithm to produce multiple outputs without any prior knowledge of the mixed source signals. The second step further removes the remaining cross-channel interference by a spectral cancellation approach using a probabilistic source absence/presence detection technique. The desired primary source is detected in every frame of the signal, and the secondary source is estimated in the power spectral domain using the other BSS output as a reference interfering source. The estimated secondary source is then subtracted to reduce the cross-channel interference. Our experimental results show good separation enhancement on real recordings of speech and music signals compared to conventional BSS methods. A spectral-cancellation sketch follows this entry.
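
The second-step idea, subtracting an estimate of the interfering source's power spectrum from the primary channel, can be sketched with a Wiener-style gain. The leakage factor, spectral floor, and toy STFT data below are illustrative assumptions, and the paper's probabilistic absence/presence detector is omitted.

```python
import numpy as np

def cross_channel_wiener(primary, reference, leak=0.5, floor=0.05):
    """Suppress interference in `primary` using `reference` as its estimate.

    Both inputs are complex STFT frames (n_frames x n_bins). A Wiener-like
    gain keeps bins where the primary dominates the scaled reference power.
    """
    p_pow = np.abs(primary) ** 2
    r_pow = leak * np.abs(reference) ** 2          # assumed leakage level
    gain = np.maximum(1.0 - r_pow / (p_pow + 1e-12), floor)
    return gain * primary

rng = np.random.default_rng(6)
target = rng.standard_normal((40, 128)) + 1j * rng.standard_normal((40, 128))
interf = rng.standard_normal((40, 128)) + 1j * rng.standard_normal((40, 128))
primary = target + 0.7 * interf         # BSS output 1: target + leakage
reference = interf                      # BSS output 2: mostly interference
clean = cross_channel_wiener(primary, reference, leak=0.49)
err_before = np.mean(np.abs(primary - target) ** 2)
err_after = np.mean(np.abs(clean - target) ** 2)
print(f"residual power before/after: {err_before:.3f} / {err_after:.3f}")
```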

Application of Block On-Line Blind Source Separation to Acoustic Echo Cancellation

  • Ngoc, Duong Q.K.;Park, Chul;Nam, Seung-Hyon
    • The Journal of the Acoustical Society of Korea / v.27 no.1E / pp.17-24 / 2008
  • Blind speech separation (BSS) is well known as a powerful technique for speech enhancement in many real-world environments. In this paper, we propose a new application of BSS: acoustic echo cancellation (AEC) in a car environment. For this purpose, we develop a block-online BSS algorithm which provides more robust separation than a batch version in changing environments with moving speakers. Simulation results using real-world recordings show that the block-online BSS algorithm is very robust to speaker movement. When combined with AEC, simulation results using real audio recordings in a car confirm that BSS improves double-talk detection and echo suppression. A block-online update sketch follows this entry.
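
The block-online structure can be illustrated independently of the convolutive details. The sketch below updates an unmixing matrix with a standard natural-gradient ICA rule once per block of samples, for an instantaneous (not convolutive) two-source mixture; it is far simpler than the paper's algorithm but shows the block-wise adaptation.

```python
import numpy as np

def block_online_bss(x, block=500, mu=0.05, n_pass=10):
    """Instantaneous natural-gradient ICA, updated block by block.

    x: mixtures, shape (n_sources, n_samples); returns the unmixing W.
    """
    n = x.shape[0]
    W = np.eye(n)
    for start in range(0, x.shape[1] - block + 1, block):
        xb = x[:, start:start + block]
        for _ in range(n_pass):                 # a few sweeps per block
            y = W @ xb
            phi = np.tanh(y)                    # score for super-Gaussian sources
            W += mu * (np.eye(n) - phi @ y.T / block) @ W
    return W

# Toy data: two Laplacian (super-Gaussian) sources, static 2x2 mixing.
rng = np.random.default_rng(7)
s = rng.laplace(size=(2, 10000))
A = np.array([[1.0, 0.6], [0.5, 1.0]])
W = block_online_bss(A @ s)
print("W @ A (ideally a scaled permutation):\n", np.round(W @ A, 2))
```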

Remote speech recognition preprocessing system for intelligent robot in noisy environment (지능로봇에 적합한 잡음 환경에서의 원거리 음성인식 전처리 시스템)

  • Gwon, Se-Do;Jeong, Hong
    • Proceedings of the IEEK Conference / 2006.06a / pp.365-366 / 2006
  • This paper describes a preprocessing methodology that can be applied to the remote speech recognition system of a service robot in a noisy environment. By combining beamforming and blind source separation, we can overcome the weaknesses of beamforming (reverberation) and of blind source separation (distributed noise, permutation ambiguity). As the method is designed for hardware implementation, real-time execution can be achieved on an FPGA using a systolic array architecture. A delay-and-sum sketch follows this entry.

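Of the two combined stages, the beamforming half is the easier to sketch. Below is a plain delay-and-sum beamformer for a uniform linear array; the array geometry, sample rate, and steering angle are illustrative assumptions, and the BSS stage and FPGA mapping are omitted.

```python
import numpy as np

def delay_and_sum(mics, fs, d, angle_deg, c=343.0):
    """Steer a uniform linear array toward angle_deg (broadside = 0).

    mics: (n_mics, n_samples). Integer-sample delays keep the sketch short;
    a real front end would use fractional-delay filters.
    """
    n_mics = mics.shape[0]
    tau = d * np.sin(np.radians(angle_deg)) / c      # inter-mic delay (s)
    out = np.zeros(mics.shape[1])
    for m in range(n_mics):
        shift = int(round(m * tau * fs))
        out += np.roll(mics[m], -shift)              # align, then average
    return out / n_mics

# Toy scene: a 4-mic array, target at 30 degrees, white noise per mic.
fs, d = 16000, 0.05
rng = np.random.default_rng(8)
src = np.sin(2 * np.pi * 500 * np.arange(fs) / fs)
tau = d * np.sin(np.radians(30)) / 343.0
mics = np.stack([np.roll(src, int(round(m * tau * fs)))
                 + 0.5 * rng.standard_normal(fs) for m in range(4)])
out = delay_and_sum(mics, fs, d, 30)
snr_mic = 10 * np.log10(np.sum(src ** 2) / np.sum((mics[0] - src) ** 2))
snr_out = 10 * np.log10(np.sum(src ** 2) / np.sum((out - src) ** 2))
print(f"SNR: single mic {snr_mic:.1f} dB -> beamformed {snr_out:.1f} dB")
```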