• Title/Summary/Keyword: speech signal


Implementation of Formant Speech Analysis/Synthesis System (포만트 분석/합성 시스템 구현)

  • Lee, Joon-Woo; Son, Ill-Kwon; Bae, Keuo-Sung
    • Speech Sciences / v.1 / pp.295-314 / 1997
  • In this study, we implement a flexible formant analysis and synthesis system. In the analysis part, a two-channel (speech & EGG signals) approach is investigated for accurate estimation of formant information. The EGG signal is used to extract the exact pitch information needed for pitch-synchronous LPC analysis and closed-phase LPC analysis. In the synthesis part, the Klatt formant synthesizer is modified so that the user can change synthesis parameters arbitrarily. Experimental results demonstrate the superiority of the two-channel analysis method over the one-channel (speech signal only) method in analysis as well as in synthesis. The implemented system is expected to be very helpful for studying the effects of synthesis parameters on the quality of synthetic speech and for the development of a Korean text-to-speech (TTS) system based on formant synthesis.
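The LPC-based formant estimation the abstract builds on can be sketched in a few lines. Below is a hedged, single-channel illustration (no EGG channel, plain autocorrelation LPC rather than the paper's pitch-synchronous or closed-phase analysis): formant candidates are taken as the angles of the LPC polynomial roots. All parameter values and the synthetic test signal are illustrative.

```python
import numpy as np

def levinson(r, order):
    """Levinson-Durbin recursion: autocorrelation sequence -> LPC coefficients."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err
        a[1:i] = a[1:i] + k * a[i - 1:0:-1]
        a[i] = k
        err *= 1.0 - k * k
    return a

def formant_frequencies(frame, fs, order=10):
    """Formant candidates = angles of LPC polynomial roots above the real axis."""
    frame = frame * np.hamming(len(frame))
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:len(frame) + order]
    roots = np.roots(levinson(r, order))
    roots = roots[np.imag(roots) > 1e-6]      # one root per conjugate pole pair
    return np.sort(np.angle(roots) * fs / (2.0 * np.pi))

# Demo: synthetic vowel-like AR(4) signal with resonances near 700 and 1800 Hz
fs = 8000
poles = []
for f in (700.0, 1800.0):
    z = 0.95 * np.exp(2j * np.pi * f / fs)
    poles += [z, np.conj(z)]
a_true = np.real(np.poly(poles))              # a_true[0] == 1
e = np.random.default_rng(0).standard_normal(4096)
x = np.zeros_like(e)
for n in range(len(e)):                       # x[n] = e[n] - sum_k a_k x[n-k]
    hist = x[max(n - 4, 0):n][::-1]
    x[n] = e[n] - np.dot(a_true[1:1 + len(hist)], hist)

f_est = formant_frequencies(x, fs, order=4)
```

With a model order matched to the number of resonances, the recovered root angles land close to the synthetic formants.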


A Single Channel Adaptive Noise Cancellation for Speech Signals (음성신호의 단일입력 적응잡음제거)

  • Gahng, Hae-Dong; Bae, Keun-Sung
    • The Journal of the Acoustical Society of Korea / v.13 no.3 / pp.16-24 / 1994
  • A single-channel adaptive noise canceling (ANC) technique is presented for removing the effects of additive noise on the speech signal. The conventional method obtains a reference signal using the pitch estimated on a frame basis from the input speech. The proposed method instead derives the reference signal from a delay estimated recursively on a sample-by-sample basis. To estimate the delay, we derive recursion formulas for the autocorrelation function and the average magnitude difference function. The performance of the proposed method is evaluated on speech signals distorted by additive white Gaussian noise. Experimental results with the normalized least mean square (NLMS) adaptive algorithm demonstrate that the proposed method improves both the signal-to-noise ratio and the perceived speech quality.
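The NLMS update the abstract relies on is standard and can be sketched in the classical two-input ANC configuration. Note this is not the paper's single-channel method (which derives its reference from a recursively estimated delay); the sketch below assumes a separately available noise reference, and all signals and parameters are synthetic.

```python
import numpy as np

def nlms_cancel(d, x, order=8, mu=0.05, eps=1e-8):
    """Two-input adaptive noise canceller with the NLMS update.
    d: primary input (speech + noise), x: noise reference.
    Returns the error signal, which is the enhanced speech."""
    w = np.zeros(order)
    e = np.zeros(len(d))
    for n in range(order, len(d)):
        x_vec = x[n - order + 1:n + 1][::-1]   # most recent reference samples
        e[n] = d[n] - np.dot(w, x_vec)         # enhanced output sample
        w += mu * e[n] * x_vec / (eps + np.dot(x_vec, x_vec))
    return e

# Demo: sinusoidal "speech" plus noise that reaches the primary input
# through an unknown FIR path; the reference sees the raw noise source.
rng = np.random.default_rng(1)
t = np.arange(20000)
s = np.sin(2 * np.pi * 0.03 * t)               # clean-signal surrogate
v = rng.standard_normal(len(t))                # noise source (reference)
h = np.array([0.6, -0.3, 0.2, 0.1])            # unknown noise path
d = s + np.convolve(v, h)[:len(t)]             # primary: speech + filtered noise
e = nlms_cancel(d, v)
```

After convergence the filter approximates the noise path, so the error signal is close to the clean component plus a small misadjustment term proportional to the step size.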


Time-Frequency Domain Impulsive Noise Detection System in Speech Signal (음성 신호에서의 시간-주파수 축 충격 잡음 검출 시스템)

  • Choi, Min-Seok; Shin, Ho-Seon; Hwang, Young-Soo; Kang, Hong-Goo
    • The Journal of the Acoustical Society of Korea / v.30 no.2 / pp.73-79 / 2011
  • This paper presents a new impulsive noise detection algorithm for speech signals. The proposed method employs the frequency-domain characteristics of impulsive noise to improve detection accuracy while avoiding false alarms caused by the pitch of the speech signal. Furthermore, we propose a time-frequency domain impulsive noise detector that utilizes both time- and frequency-domain parameters, which minimizes false alarms by letting the two domains complement each other. As a result, the proposed time-frequency domain detector shows the best performance, with a detection accuracy of 99.33 % and a false-alarm rate of 1.49 %.
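The idea of requiring both a time-domain and a frequency-domain cue can be illustrated with a generic sketch. This is not the authors' algorithm: the crest-factor and spectral-flatness features and their thresholds are illustrative stand-ins for whatever parameters the paper actually uses.

```python
import numpy as np

def detect_clicks(x, frame=256, crest_th=4.0, flat_th=0.5):
    """Flag frames that look impulsive: a high time-domain crest factor
    AND a flat (white-like) magnitude spectrum. Requiring both cues keeps
    strongly pitched frames from raising false alarms."""
    hits = []
    for start in range(0, len(x) - frame, frame):
        seg = x[start:start + frame]
        rms = np.sqrt(np.mean(seg ** 2)) + 1e-12
        crest = np.max(np.abs(seg)) / rms                       # time-domain cue
        mag = np.abs(np.fft.rfft(seg)) + 1e-12
        flatness = np.exp(np.mean(np.log(mag))) / np.mean(mag)  # frequency-domain cue
        if crest > crest_th and flatness > flat_th:
            hits.append(start)
    return hits

# Demo: a pitched tone with a single click at sample 1000
x = 0.1 * np.sin(2 * np.pi * 8 * np.arange(2048) / 256)
x[1000] += 5.0
hits = detect_clicks(x)
```

The pitched frames fail the crest-factor test, so only the frame containing the click is flagged.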

An Improvement of Speech Hearing Ability for Sensorineural Impaired Listeners (감음성(感音性) 난청인의 언어청력 향상에 관한 연구)

  • Lee, S.M.; Woo, H.C.; Kim, D.W.; Song, C.G.; Lee, Y.M.; Kim, W.K.
    • Proceedings of the KOSOMBE Conference / v.1996 no.05 / pp.240-242 / 1996
  • In this paper, we propose a hearing-aid processing method suitable for the sensorineural hearing impaired. Because sensorineural hearing-impaired listeners generally have a narrow audible range between the hearing threshold and the discomfort level, the speech spectrum may easily go beyond this range. The speech spectrum must therefore be optimally amplified and compressed into the listener's audible range. Since the level and frequency content of the input speech vary continuously, the input signal must be compensated for the listener's frequency-dependent hearing loss, especially in the frequency bands that carry the most information. The input signal is divided into short-time blocks and the spectrum within each block is calculated. The frequency-gain characteristic is determined from this spectrum: the number of frequency bands and the target gain to be applied to the input signal are estimated. The input signal within the block is then processed by a single digital filter with the calculated frequency-gain characteristic. Monosyllabic speech tests performed to evaluate the proposed algorithm showed improved scores.
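The per-block frequency-gain shaping described above can be sketched as a multiband compressor applied in the FFT domain. This is a simplified stand-in for the paper's method; the band count, threshold, and compression ratio are illustrative, and a real hearing aid would fit them to the listener's audiogram.

```python
import numpy as np

def block_gain_shape(block, n_bands=4, thresh_db=-30.0, ratio=3.0):
    """Per-block multiband compression sketch: estimate each frequency
    band's level, compress levels above thresh_db by `ratio`, and apply
    the resulting frequency-gain characteristic in a single pass."""
    spec = np.fft.rfft(block)
    edges = np.linspace(0, len(spec), n_bands + 1).astype(int)
    gains = np.ones(len(spec))
    for b in range(n_bands):
        lo, hi = edges[b], edges[b + 1]
        level = 20.0 * np.log10(np.sqrt(np.mean(np.abs(spec[lo:hi]) ** 2)) + 1e-12)
        if level > thresh_db:                  # loud band: compress into range
            out_level = thresh_db + (level - thresh_db) / ratio
            gains[lo:hi] = 10.0 ** ((out_level - level) / 20.0)
    return np.fft.irfft(spec * gains, n=len(block))

# Demo: a loud tone is compressed, a quiet tone passes through unchanged
n = np.arange(256)
loud = np.sin(2 * np.pi * 8 * n / 256)
quiet = 1e-3 * np.sin(2 * np.pi * 8 * n / 256)
out_loud = block_gain_shape(loud)
out_quiet = block_gain_shape(quiet)
```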


Speech Enhancement Using Blind Signal Separation Combined With Null Beamforming

  • Nam Seung-Hyon; Rodrigo C. Munoz Jr.
    • The Journal of the Acoustical Society of Korea / v.25 no.4E / pp.142-147 / 2006
  • Blind signal separation is known as a powerful tool for enhancing noisy speech in many real-world environments. In this paper, it is demonstrated that the performance of blind signal separation can be further improved by combining it with a null beamformer (NBF). Cascading blind source separation with null beamforming is equivalent to decomposing the received signals into their direct and reverberant parts. Investigation of the beam patterns of the null beamformer and of blind signal separation reveals that the directional null of the NBF mainly reduces the direct parts of the unwanted signals, whereas blind signal separation reduces the reverberant parts. Further, it is shown that this decomposition of the received signals can be exploited to solve the local stability problem. Therefore, faster and improved separation can be obtained by first removing the direct parts with null beamforming. Simulation results using real office recordings confirm this expectation.
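The null-beamforming front end can be illustrated in its simplest delay-and-subtract form. The BSS stage and the direct/reverberant decomposition are beyond this sketch; the array geometry and the inter-mic delay are synthetic assumptions.

```python
import numpy as np

def null_beamform(x1, x2, delay):
    """Two-microphone delay-and-subtract null beamformer: steers a spatial
    null toward a jammer that reaches mic 2 `delay` samples after mic 1."""
    y = np.zeros(len(x1))
    y[delay:] = x1[:len(x1) - delay] - x2[delay:]
    return y

# Demo: the desired signal arrives broadside (identical at both mics);
# the jammer arrives off-axis with a 3-sample inter-mic delay.
rng = np.random.default_rng(2)
N, d = 4000, 3
s = np.sin(2 * np.pi * 0.01 * np.arange(N))   # desired signal
j = rng.standard_normal(N)                    # jammer
j_delayed = np.zeros(N)
j_delayed[d:] = j[:N - d]
y = null_beamform(s + j, s + j_delayed, d)    # jammer cancelled exactly;
                                              # s survives as s[n-d] - s[n]
```

The direct-path jammer is removed exactly, at the cost of a mild comb filtering of the desired signal, which a subsequent separation or equalization stage would have to account for.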

Adaptive Noise Canceller by Weight Updating Control Method for Speech Enhancement (음성향상을 위한 가중치 갱신제어방식의 적응소음제거기)

  • Kim, Gyu-Dong; Lee, Yun-Jung; Kim, Pil-Un; Chang, Yong-Min; Cho, Jin-Ho; Kim, Myoung-Nam
    • Journal of Korea Multimedia Society / v.10 no.8 / pp.1004-1016 / 2007
  • In this paper, we propose a weight-update-control adaptive noise canceller that improves speech quality when the environmental noise is stationary and a reference signal is hard to acquire. An adaptive noise canceller (ANC) needs a reference signal, but in a factory it is not easy to measure pure noise without voice for this reference, because various mechanical noises and workers' voices are mixed together; a conventional ANC is therefore not suitable for reducing such background noise. We propose a method that uses an arbitrary constant as the input signal and feeds the microphone signal to the reference input. The noise is eliminated using weights updated during non-speech intervals; during speech intervals the weights are frozen, and the modified voice is then restored through a transversal filter. The proposed method relies on the facts that factory noise is stationary and does not change over a short conversation. Simulation results in MATLAB confirm that the proposed method effectively reduces factory noise and achieves a high signal-to-noise ratio (SNR).


Independent Component Analysis Based on Frequency Domain Approach Model for Speech Source Signal Extraction (음원신호 추출을 위한 주파수영역 응용모델에 기초한 독립성분분석)

  • Choi, Jae-Seung
    • The Journal of the Korea institute of electronic communication sciences / v.15 no.5 / pp.807-812 / 2020
  • This paper proposes a blind speech source separation algorithm that uses microphone signals to extract only the target speech source in an environment where various speech sources are mixed. The proposed algorithm is a frequency-domain model based on the independent component analysis method. To verify the validity of frequency-domain independent component analysis for two speech sources, the proposed algorithm is run with different types of source signals and the resulting separation improvement is assessed. The experimental waveforms show that the two-channel speech source signals are clearly separated when compared with the original waveforms. In addition, experimental results using the target-signal-to-interference energy ratio show that the proposed algorithm improves speech source separation performance compared with existing algorithms.

A Training Method for Emotionally Robust Speech Recognition using Frequency Warping (주파수 와핑을 이용한 감정에 강인한 음성 인식 학습 방법)

  • Kim, Weon-Goo
    • Journal of the Korean Institute of Intelligent Systems / v.20 no.4 / pp.528-533 / 2010
  • This paper studies training methods that are less affected by emotional variation, for the development of a robust speech recognition system. For this purpose, the effects of emotional variation on the speech signal and on the speech recognition system were studied using a speech database containing various emotions. The performance of a speech recognition system trained on speech containing no emotion deteriorates when the test speech contains emotion, because of the emotional mismatch between the test and training data. In this study, it is observed that the speaker's vocal tract length is affected by emotional variation, and that this effect is one of the reasons the performance of the speech recognition system degrades. A training method that covers these speech variations is proposed to develop an emotionally robust speech recognition system. Experimental results from isolated word recognition using HMMs showed that the proposed method reduced the error rate of the conventional recognition system by 28.4% on emotional test data.
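The frequency-warping idea behind such training can be illustrated by resampling a magnitude spectrum along a warped frequency axis, mimicking a change in vocal tract length. This is a minimal sketch, not the paper's full training procedure; the warp factor and the linear interpolation scheme are illustrative.

```python
import numpy as np

def warp_spectrum(mag, alpha):
    """Resample a magnitude spectrum onto a warped axis: warped bin k
    reads from source bin alpha*k, so with alpha > 1 spectral peaks
    (e.g. formants) move downward, as with a longer vocal tract."""
    n = len(mag)
    src = np.clip(np.arange(n) * alpha, 0, n - 1)  # source position per bin
    lo = np.floor(src).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    frac = src - lo
    return (1 - frac) * mag[lo] + frac * mag[hi]   # linear interpolation

# Demo: a spectral peak at bin 100 moves to bin 100/1.25 = 80
bins = np.arange(256)
mag = np.exp(-0.5 * ((bins - 100) / 5.0) ** 2)
warped = warp_spectrum(mag, 1.25)
```

Generating warped copies of the training spectra with several alpha values is one way to make the trained models cover speaker and emotion-induced spectral shifts.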

Performance Improvement in the Multi-Model Based Speech Recognizer for Continuous Noisy Speech Recognition (연속 잡음 음성 인식을 위한 다 모델 기반 인식기의 성능 향상에 대한 연구)

  • Chung, Yong-Joo
    • Speech Sciences / v.15 no.2 / pp.55-65 / 2008
  • Recently, the multi-model based speech recognizer has been used quite successfully for noisy speech recognition. To select the reference HMM (hidden Markov model) that best matches the noise type and SNR (signal-to-noise ratio) of the input test speech, the estimation of the SNR using a VAD (voice activity detection) algorithm and the classification of the noise type based on a GMM (Gaussian mixture model) have previously been done separately in the multi-model framework. As the SNR estimation process is vulnerable to errors, we propose an efficient method that classifies the SNR value and the noise type simultaneously. The KL (Kullback-Leibler) distance between the single Gaussian distributions of the noise signal during training and testing is used for the classification. Recognition experiments on the Aurora 2 database show the usefulness of the model compensation method in the multi-model based speech recognizer. Further performance improvement was achieved by combining the probability density function of MCT (multi-condition training) with that of the reference HMM compensated by D-JA (data-driven Jacobian adaptation).
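The KL distance between single Gaussians has a closed form, which is what makes the joint (noise type, SNR) selection cheap. A hedged sketch follows; the table of trained conditions and its statistics are hypothetical, standing in for whatever the recognizer actually stores per reference HMM.

```python
import numpy as np

def kl_gauss(mu0, var0, mu1, var1):
    """Closed-form KL divergence KL(N0 || N1) between two univariate
    Gaussians N0 = N(mu0, var0) and N1 = N(mu1, var1)."""
    return 0.5 * (np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

def closest_condition(test_stats, trained):
    """Pick the trained (noise type, SNR) condition whose noise Gaussian
    is nearest to the test noise statistics in the KL sense."""
    mu0, var0 = test_stats
    return min(trained, key=lambda cond: kl_gauss(mu0, var0, *trained[cond]))

# Demo with hypothetical per-condition noise statistics (mean, variance)
trained = {("babble", 10): (0.1, 1.1),
           ("white", 5): (2.0, 4.0)}
picked = closest_condition((0.0, 1.0), trained)
```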


Emotion Robust Speech Recognition using Speech Transformation (음성 변환을 사용한 감정 변화에 강인한 음성 인식)

  • Kim, Weon-Goo
    • Journal of the Korean Institute of Intelligent Systems / v.20 no.5 / pp.683-687 / 2010
  • This paper studies methods that use frequency warping, one of the speech transformation methods, to develop a speech recognition system robust to emotional variation. For this purpose, the effects of emotional variation on the speech signal were studied using a speech database containing various emotions, and it was observed that the speech spectrum is affected by emotional variation and that this effect is one of the reasons the performance of the speech recognition system degrades. A new training method that applies frequency warping during training is presented to reduce the effect of emotional variation, and a speech recognition system based on vocal tract length normalization is developed for comparison with the proposed system. Experimental results from isolated word recognition using HMMs showed that the new training method reduced the error rate of the conventional recognition system on speech containing various emotions.