• Title/Abstract/Keyword: speech separation

Search results: 88 (processing time: 0.024 s)

Speech Enhancement Using Receding Horizon FIR Filtering

  • Kim, Pyung-Soo;Kwon, Wook-Hyu;Kwon, Oh-Kyu
    • Transactions on Control, Automation and Systems Engineering / Vol. 2, No. 1 / pp.7-12 / 2000
  • A new speech enhancement algorithm for speech corrupted by slowly varying additive colored noise is suggested, based on a state-space signal model. Owing to its FIR structure and the unimportance of long-term past information, the receding horizon (RH) FIR filter, known to be a best linear unbiased estimation (BLUE) filter, is utilized to obtain the noise-suppressed speech signal. As a special case of the colored-noise problem, the suggested approach is generalized to perform single-channel blind separation of two speech signals. It is shown that the exact speech signal is obtained when the incoming speech signal is noise-free.

  • PDF
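The receding-horizon idea above — each estimate uses only a short window of recent samples — can be illustrated with a finite-memory smoother. The sketch below is a plain moving-average FIR filter, not the paper's state-space BLUE filter; the horizon length and test signal are illustrative assumptions.

```python
import numpy as np

def finite_memory_smoother(x, horizon=16):
    """Moving-average FIR smoother using only the last `horizon` samples.

    Illustrates the finite-memory principle of receding-horizon FIR
    filtering (long-term past information is discarded); it is NOT the
    paper's state-space BLUE filter.
    """
    x = np.asarray(x, dtype=float)
    kernel = np.ones(horizon) / horizon
    # Causal convolution: each output depends only on current/past samples
    return np.convolve(x, kernel)[:len(x)]

# A constant level buried in white noise: the smoother reduces the noise
rng = np.random.default_rng(0)
noisy = 1.0 + 0.1 * rng.standard_normal(1000)
clean = finite_memory_smoother(noisy)
print(clean[32:].std() < noisy.std())
```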

정현파 모델과 사이코어쿠스틱스 모델을 이용한 음성 분리에 관한 연구 (A Study on Speech Separation using Sinusoidal Model and Psycoacoustics Model)

  • 황선일;한두진;귄철현;신대규;박상희
    • 대한전기학회 Conference Proceedings / 2001 Summer Conference, Vol. D / pp.2622-2624 / 2001
  • Speaker separation is employed when speech from two talkers has been summed into one signal and it is desirable to recover one or both of the speech signals from the composite. This paper proposes a method for separating the summed speech signals and verifies the similarity between the original and separated signals by means of their cross-correlation. A frequency-sampling method based on the sinusoidal model is used to separate a composite of two vocalic speech signals, and a noise-masking method based on the psychoacoustic model is used to separate a composite of vocalic and non-vocalic speech.

  • PDF
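As a sketch of the sinusoidal-model analysis used above, the following hypothetical snippet picks the strongest spectral peaks of one windowed frame; assigning peaks to individual talkers (the actual separation step) is omitted, and the sample rate and tone frequencies are made up for the demo.

```python
import numpy as np

def sinusoidal_peaks(frame, fs, n_peaks=5):
    """Return the frequencies (Hz) of the strongest spectral peaks.

    Toy sinusoidal-model analysis: the frame is treated as a sum of
    sinusoids located at local maxima of the windowed magnitude
    spectrum. Assigning peaks to talkers is the (omitted) separation step.
    """
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    # Local maxima of the magnitude spectrum, strongest first
    locs = [k for k in range(1, len(spec) - 1)
            if spec[k] > spec[k - 1] and spec[k] >= spec[k + 1]]
    locs.sort(key=lambda k: spec[k], reverse=True)
    return sorted(k * fs / len(frame) for k in locs[:n_peaks])

fs = 8000
t = np.arange(1024) / fs
frame = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)
peaks = sinusoidal_peaks(frame, fs, n_peaks=2)
print(peaks)
```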

Sinusoidal Model을 이용한 Cochannel상에서의 음성분리에 관한 연구 (A Study on Speech Separation in Cochannel using Sinusoidal Model)

  • 박현규;신중인;박상희
    • 대한전기학회 Conference Proceedings / 1997 Fall Conference (Headquarters) / pp.597-599 / 1997
  • Cochannel speaker separation is employed when speech from two talkers has been summed into one signal and it is desirable to recover one or both of the speech signals from the composite. Cochannel speech occurs in many common situations, such as when two AM signals containing speech are transmitted on the same frequency or when two people speak simultaneously (e.g., on the telephone). This paper proposes a method for separating speech in such situations; in particular, only the voiced sounds among the speech states are separated. The similarity between the original and separated signals is verified by means of their cross-correlation.

  • PDF
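The cross-correlation check used above to compare original and separated signals can be sketched as follows; the normalization and the test signals are illustrative assumptions.

```python
import numpy as np

def similarity(a, b):
    """Peak of the normalized cross-correlation between two signals.

    Values near 1 indicate the separated signal closely matches the
    original; taking the maximum over all lags allows for a time shift.
    """
    a = (a - np.mean(a)) / (np.std(a) * len(a))
    b = (b - np.mean(b)) / np.std(b)
    return float(np.max(np.correlate(a, b, mode='full')))

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
s = np.sin(2 * np.pi * 5 * t)
noise = np.random.default_rng(1).standard_normal(1000)
print(similarity(s, s) > 0.99)      # identical signals score ~1
print(similarity(s, noise) < 0.5)   # unrelated signals score low
```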

장애음성의 주기성분과 잡음성분의 분리 방법에 관하여 (Separation of Periodic and Aperiodic Components of Pathological Speech Signal)

  • 조철우;리타오
    • 대한음성학회 Conference Proceedings / October 2003 Conference / pp.25-28 / 2003
  • The aim of this paper is to analyze pathological voice by separating the signal into periodic and aperiodic parts. Separation was performed recursively on the residual signal of the voice signal. Starting from an initial estimate of the aperiodic part of the spectrum, the aperiodic part is determined by an extrapolation method, and the periodic part is obtained by subtracting the aperiodic part from the original spectrum. An HNR parameter is derived from this separation, and its statistics are compared with those of jitter and shimmer for normal, benign, and malignant cases.

  • PDF
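An HNR figure like the one derived above can be sketched from the normalized autocorrelation peak in the pitch range; this is a classical shortcut, not the paper's spectral extrapolation method, and the pitch range and test signals are illustrative assumptions.

```python
import numpy as np

def hnr_db(x, fs, f0_range=(70, 400)):
    """Rough harmonics-to-noise ratio from the autocorrelation peak.

    The normalized autocorrelation maximum r inside the pitch-lag range
    is taken as the periodic energy fraction, giving
    HNR = 10*log10(r / (1 - r)). A classical shortcut, not the paper's
    spectral extrapolation method.
    """
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    # Autocorrelation for the candidate pitch lags only (cheap, direct)
    lags = range(int(fs / f0_range[1]), int(fs / f0_range[0]))
    r0 = float(np.dot(x, x))
    r = max(float(np.dot(x[:-k], x[k:])) for k in lags) / r0
    r = min(max(r, 1e-6), 1.0 - 1e-6)   # keep the log argument finite
    return 10.0 * np.log10(r / (1.0 - r))

fs = 8000
t = np.arange(4000) / fs
voiced = np.sin(2 * np.pi * 150 * t)                       # periodic
noisy = voiced + np.random.default_rng(2).standard_normal(4000)
h_clean, h_noisy = hnr_db(voiced, fs), hnr_db(noisy, fs)
print(h_clean > h_noisy)
```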

A New Formulation of Multichannel Blind Deconvolution: Its Properties and Modifications for Speech Separation

  • Nam, Seung-Hyon;Jee, In-Nho
    • The Journal of the Acoustical Society of Korea / Vol. 25, No. 4E / pp.148-153 / 2006
  • A new normalized MBD algorithm is presented for nonstationary convolutive mixtures, and its properties and modifications are discussed in detail. The proposed algorithm normalizes the signal spectrum in the frequency domain to provide faster, stable convergence and improved separation without a whitening effect. Modifications of the proposed algorithm, such as nonholonomic constraints and off-diagonal learning, are also discussed. Simulation results using real-world recordings confirm the superior performance of the proposed algorithm and its usefulness in real-world applications.

주파수 특성 기저벡터 학습을 통한 특정화자 음성 복원 (Target Speaker Speech Restoration via Spectral bases Learning)

  • 박선호;유지호;최승진
    • 한국정보과학회논문지: Software and Applications / Vol. 36, No. 3 / pp.179-186 / 2009
  • This paper proposes a target-speaker speech restoration algorithm that uses a stereo microphone in a real environment with noise and reverberation, for the case where training speech from the target speaker is available. To this end, convolutive blind source separation (CBSS), which separates sources in a reverberant environment, is combined with post-processing methods, yielding a system that removes noise and reverberation from the noisy multipath signals and restores only the target speaker's speech. Specifically, basis vectors that preserve spectral characteristics are learned from the target speaker's training speech using non-negative matrix factorization (NMF), and two post-processing stages based on these basis vectors are proposed. First, CBSS, the intermediate stage of the system, takes the multipath signals as input and outputs independent sources (two channels), and the channel closer to the target speaker's speech is selected automatically (channel-selection stage). The noise and other interference sources remaining in the selected channel are then removed, finally restoring the target speaker's speech free of noise and reverberation (restoration stage). Since both post-processing stages operate on basis vectors learned from the target speaker's speech, the characteristic spectral information of the target speaker can be used efficiently for restoration. This paper thus presents a way to combine prior source information with CBSS, improving the separation results of conventional CBSS while restoring only the target speaker's speech. Experiments confirm that the proposed method successfully restores the target speaker's speech in noisy, reverberant environments.
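The NMF basis-learning step described above can be sketched with the standard Lee-Seung multiplicative updates; the toy matrix below stands in for a magnitude spectrogram, and nothing here reproduces the paper's CBSS stage or channel selection.

```python
import numpy as np

def nmf(V, r, n_iter=500, seed=0):
    """Non-negative matrix factorization V ~ W @ H (Lee-Seung
    multiplicative updates, Euclidean cost). In the paper's setting V
    would be a magnitude spectrogram of the target speaker's training
    speech, and the columns of W the learned spectral basis vectors.
    """
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 1e-3
    H = rng.random((r, m)) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)   # update basis vectors
    return W, H

# Toy "spectrogram" with two disjoint spectral patterns
V = np.array([[1.0, 0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
W, H = nmf(V, r=2)
err = np.linalg.norm(V - W @ H)
print(err < 0.3)
```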

음성 신호로부터 주기, 비주기 성분의 반복적 계산법에 의한 분리 실험 (Iterative Computation of Periodic and Aperiodic Part from Speech Signal)

  • 조철우;리타오
    • 대한음성학회지: 말소리 / No. 48 / pp.117-126 / 2003
  • The source of a speech signal is actually a combination of periodic and aperiodic components, although it is often modeled as only one of the two. This paper describes an experiment that separates the periodic and aperiodic components of the speech source. The linear-prediction residual of the original speech was used as an approximation of the vocal source to obtain the estimated aperiodic part, which was computed by an iterative extrapolation method.

  • PDF
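The linear-prediction residual used above as an approximate vocal source can be computed as sketched below (autocorrelation method); the iterative extrapolation that splits it into periodic and aperiodic parts is not reproduced, and the test tone is an illustrative assumption.

```python
import numpy as np

def lpc_residual(x, order=2):
    """Linear-prediction residual via the autocorrelation (Yule-Walker)
    method. The residual approximates the vocal source signal; the
    paper's iterative periodic/aperiodic split of that residual is not
    reproduced here.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Autocorrelation values r_0 .. r_order
    ac = np.array([np.dot(x[:n - k], x[k:]) for k in range(order + 1)])
    # Solve the Yule-Walker equations R a = r (lstsq tolerates the
    # near-singular R that a pure sinusoid produces)
    R = np.array([[ac[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.lstsq(R, ac[1:], rcond=None)[0]
    # Prediction x_hat[t] = sum_k a[k] * x[t-1-k]; residual = x - x_hat
    pred = np.zeros(n)
    for k in range(order):
        pred[k + 1:] += a[k] * x[:n - k - 1]
    return x - pred

fs = 8000
t = np.arange(4000) / fs
x = np.sin(2 * np.pi * 200 * t)       # perfectly predictable source
res = lpc_residual(x, order=2)
print(np.std(res[10:]) < 0.05 * np.std(x))
```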

Multi-channel Speech Enhancement Using Blind Source Separation and Cross-channel Wiener Filtering

  • Jang, Gil-Jin;Choi, Chang-Kyu;Lee, Yong-Beom;Kim, Jeong-Su;Kim, Sang-Ryong
    • The Journal of the Acoustical Society of Korea / Vol. 23, No. 2E / pp.56-67 / 2004
  • Despite abundant research outcomes on blind source separation (BSS) in many types of simulated environments, its performance is still not satisfactory for real environments. The major obstacles appear to be the finite filter length of the assumed mixing model and nonlinear sensor noise. This paper presents a two-step speech enhancement method with multiple microphone inputs. The first step performs a frequency-domain BSS algorithm to produce multiple outputs without any prior knowledge of the mixed source signals. The second step further removes the remaining cross-channel interference by a spectral cancellation approach using a probabilistic source absence/presence detection technique. The desired primary source is detected in every frame of the signal, and the secondary source is estimated in the power-spectral domain using the other BSS output as a reference interfering source. The estimated secondary source is then subtracted to reduce the cross-channel interference. Our experimental results show good separation and enhancement performance on real recordings of speech and music signals compared with conventional BSS methods.
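The cross-channel cancellation idea above — one BSS output serving as a reference for the interference in the other — can be sketched as a power-spectral subtraction. The paper's probabilistic presence/absence detection is omitted, and the over-subtraction and floor parameters are illustrative assumptions.

```python
import numpy as np

def spectral_cancel(primary, reference, alpha=1.0, floor=0.01):
    """Power-spectral subtraction of a reference interference.

    |S|^2 = max(|X|^2 - alpha*|R|^2, floor*|X|^2), reusing the phase of
    the primary channel. Simplified stand-in for the paper's
    cross-channel cancellation: the other BSS output serves directly as
    the interference estimate, with no presence/absence detection.
    """
    X = np.fft.rfft(primary)
    R = np.fft.rfft(reference)
    power = np.maximum(np.abs(X) ** 2 - alpha * np.abs(R) ** 2,
                       floor * np.abs(X) ** 2)
    return np.fft.irfft(np.sqrt(power) * np.exp(1j * np.angle(X)),
                        n=len(primary))

def tone_power(x, f, fs):
    """Magnitude of the FFT bin closest to frequency f."""
    return np.abs(np.fft.rfft(x))[int(f * len(x) / fs)]

fs = 8000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 300 * t)          # desired source
interf = np.sin(2 * np.pi * 1200 * t)         # interfering source
mixed = speech + 0.5 * interf
out = spectral_cancel(mixed, interf)
# The interference tone drops while the desired tone survives
print(tone_power(out, 1200, fs) < 0.2 * tone_power(mixed, 1200, fs))
print(tone_power(out, 300, fs) > 0.9 * tone_power(mixed, 300, fs))
```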

Application of Block On-Line Blind Source Separation to Acoustic Echo Cancellation

  • Ngoc, Duong Q.K.;Park, Chul;Nam, Seung-Hyon
    • The Journal of the Acoustical Society of Korea / Vol. 27, No. 1E / pp.17-24 / 2008
  • Blind speech separation (BSS) is well known as a powerful technique for speech enhancement in many real-world environments. In this paper, we propose a new application of BSS: acoustic echo cancellation (AEC) in a car environment. For this purpose, we develop a block-online BSS algorithm which provides more robust separation than a batch version in changing environments with moving speakers. Simulation results using real-world recordings show that the block-online BSS algorithm is very robust to speaker movement. When combined with AEC, simulation results using real audio recordings in a car confirm that BSS improves double-talk detection and echo suppression.

지능로봇에 적합한 잡음 환경에서의 원거리 음성인식 전처리 시스템 (Remote speech recognition preprocessing system for intelligent robot in noisy environment)

  • 권세도;정홍
    • 대한전자공학회 Conference Proceedings / 2006 Summer Conference / pp.365-366 / 2006
  • This paper describes a preprocessing methodology that can be applied to the remote speech recognition system of a service robot in noisy environments. By combining beamforming and blind source separation, we overcome the weaknesses of beamforming (reverberation) and of blind source separation (distributed noise, permutation ambiguity). As the method is designed for hardware implementation, real-time execution is achieved on an FPGA using a systolic array architecture.

  • PDF
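A minimal delay-and-sum beamformer, the simplest form of the beamforming stage mentioned above, can be sketched as follows; the integer-sample delays and two-microphone mixing setup are illustrative assumptions, and the BSS stage is omitted.

```python
import numpy as np

def delay_and_sum(channels, delays):
    """Delay-and-sum beamformer with integer-sample steering delays.

    Each channel is advanced by its delay and the results are averaged:
    a source arriving with exactly these delays adds coherently while
    uncorrelated noise is attenuated. Fractional delays and the BSS
    stage of the paper are omitted.
    """
    n = min(len(c) - d for c, d in zip(channels, delays))
    return sum(np.asarray(c, dtype=float)[d:d + n]
               for c, d in zip(channels, delays)) / len(channels)

rng = np.random.default_rng(3)
src = rng.standard_normal(4000)
# The source reaches mic 2 five samples late; each mic adds its own noise
mic1 = src + 0.5 * rng.standard_normal(4000)
mic2 = np.concatenate([np.zeros(5), src[:-5]]) + 0.5 * rng.standard_normal(4000)
out = delay_and_sum([mic1, mic2], delays=[0, 5])
err_bf = np.std(out - src[:len(out)])
err_single = np.std(mic1[:len(out)] - src[:len(out)])
print(err_bf < err_single)     # beamforming reduces the noise
```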