• Title/Summary/Keyword: Speech Signals

Speech synthesis using acoustic Doppler signal (초음파 도플러 신호를 이용한 음성 합성)

  • Lee, Ki-Seung
    • The Journal of the Acoustical Society of Korea
    • /
    • v.35 no.2
    • /
    • pp.134-142
    • /
    • 2016
  • In this paper, a method for synthesizing speech signals from 40 kHz ultrasonic signals reflected from the articulatory muscles is introduced and its performance is evaluated. When ultrasound is radiated toward the articulating face, Doppler effects caused by movements of the lips, jaw, and chin are observed: the received signals contain frequency components that differ from those of the transmitted signal. These ADS (Acoustic Doppler Signals) were used to estimate speech parameters in this study. Prior to synthesizing speech, a quantitative correlation analysis between the ADS and speech signals was carried out for each frequency bin; the results validated the feasibility of ADS-based speech synthesis. ADS-to-speech transformation was achieved by joint Gaussian mixture model-based conversion rules. Experiments on five subjects showed that filter bank energies and LPC (Linear Predictive Coefficient) cepstrum coefficients are the optimal features for ADS and speech, respectively. In a subjective evaluation where the synthesized speech was obtained using excitation sources extracted from the original speech signals, the ADS-to-speech conversion method yielded a 72.2 % average recognition rate.
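The joint Gaussian mixture model conversion rule mentioned in the abstract can be sketched in its simplest form, a single Gaussian component, where the conversion reduces to the conditional mean of a joint Gaussian. The features below are synthetic stand-ins, not the paper's filter-bank/LPC-cepstrum features:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy paired "ADS" and "speech" feature vectors (synthetic stand-ins).
n, dx, dy = 2000, 4, 3
A = rng.normal(size=(dy, dx))
X = rng.normal(size=(n, dx))                   # source (ADS) features
Y = X @ A.T + 0.1 * rng.normal(size=(n, dy))   # target (speech) features

# Single-component joint-Gaussian conversion rule:
# y_hat = mu_y + C_yx C_xx^{-1} (x - mu_x)
Z = np.hstack([X, Y])
mu = Z.mean(axis=0)
C = np.cov(Z, rowvar=False)
mu_x, mu_y = mu[:dx], mu[dx:]
C_xx, C_yx = C[:dx, :dx], C[dx:, :dx]
W = C_yx @ np.linalg.inv(C_xx)

def convert(x):
    """Map a source feature vector to a target feature estimate."""
    return mu_y + W @ (x - mu_x)

Y_hat = np.array([convert(x) for x in X])
rmse = np.sqrt(np.mean((Y_hat - Y) ** 2))
```

A full mixture, as in the paper, computes this conditional mean per component and blends the results by posterior component probabilities.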

AM-FM Decomposition and Estimation of Instantaneous Frequency and Instantaneous Amplitude of Speech Signals for Natural Human-robot Interaction (자연스런 인간-로봇 상호작용을 위한 음성 신호의 AM-FM 성분 분해 및 순간 주파수와 순간 진폭의 추정에 관한 연구)

  • Lee, He-Young
    • Speech Sciences
    • /
    • v.12 no.4
    • /
    • pp.53-70
    • /
    • 2005
  • Vowels in speech signals are multicomponent signals composed of AM-FM components whose instantaneous frequencies and instantaneous amplitudes are time-varying. Changes in emotional state cause variations in these instantaneous frequencies and amplitudes, so estimating them accurately is important for extracting the key information that represents emotional states and their changes in speech signals. In this paper, a method for decomposing speech signals into AM-FM components is first addressed. Second, the fundamental frequency of a vowel sound is estimated by a simple spectrogram-based method; this estimate is used in the decomposition. Third, an estimation method is suggested for the instantaneous frequencies and instantaneous amplitudes of the decomposed AM-FM components, based on the Hilbert transform and the demodulation property of the extended Fourier transform. The resulting estimates can be used for modifying the spectral distribution and for smoothly connecting two words in corpus-based speech synthesis systems.
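For a single AM-FM component, the Hilbert-transform estimation step described above can be sketched with numpy alone, via the frequency-domain construction of the analytic signal. The 200 Hz carrier and modulation rates are illustrative, not values from the paper:

```python
import numpy as np

fs = 8000
t = np.arange(0, 1.0, 1 / fs)

# Synthetic AM-FM component: a 200 Hz carrier with slow AM and FM.
inst_amp_true = 1.0 + 0.3 * np.cos(2 * np.pi * 3 * t)
inst_freq_true = 200 + 20 * np.sin(2 * np.pi * 2 * t)
phase = 2 * np.pi * np.cumsum(inst_freq_true) / fs
x = inst_amp_true * np.cos(phase)

def analytic(x):
    """Analytic signal via the frequency-domain Hilbert transform."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = h[N // 2] = 1.0   # N is even here
    h[1:N // 2] = 2.0        # double positive frequencies, zero negatives
    return np.fft.ifft(X * h)

z = analytic(x)
inst_amp = np.abs(z)                                            # instantaneous amplitude
inst_freq = np.diff(np.unwrap(np.angle(z))) * fs / (2 * np.pi)  # instantaneous frequency
```

The estimates track the true envelope and frequency closely away from the signal edges, where the periodic-extension assumption of the FFT causes small ripples.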

A Study on the Endpoint Detection by FIR Filtering (FIR filtering에 의한 끝점추출에 관한 연구)

  • Lee, Chang-Young
    • Speech Sciences
    • /
    • v.5 no.1
    • /
    • pp.81-88
    • /
    • 1999
  • This paper provides a method for speech endpoint detection. After first-order FIR filtering of the speech signals, we applied the conventional endpoint detection method, which uses energy as the criterion for separating signals from background noise. Under FIR filtering, only the Fourier components with large values of [amplitude x frequency] remain significant in the energy profile. Applying this procedure to the 445-word database constructed by ETRI, we confirmed that low-amplitude and/or low-frequency noise is separated clearly from the speech signals, bringing the detected endpoints closer to the ideal ones.
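The procedure above, a first-order FIR pre-filter followed by conventional energy-based endpoint detection, can be sketched as follows. The toy signal, filter coefficient, and threshold are illustrative assumptions, not the paper's settings:

```python
import numpy as np

fs = 8000
t = np.arange(0, 1.0, 1 / fs)

# Toy recording: low-frequency hum everywhere, a "speech" burst in the middle.
hum = 0.2 * np.sin(2 * np.pi * 30 * t)
speech = np.where((t > 0.4) & (t < 0.6), np.sin(2 * np.pi * 800 * t), 0.0)
x = hum + speech

# First-order FIR filter: y[n] = x[n] - a*x[n-1].  Its gain rises with
# frequency, so only components with large amplitude*frequency remain
# significant in the energy profile.
a = 0.95
y = x - a * np.concatenate(([0.0], x[:-1]))

# Conventional short-term-energy endpoint detection on the filtered signal.
frame = 160  # 20 ms
energies = np.array([np.sum(y[i:i + frame] ** 2)
                     for i in range(0, len(y) - frame, frame)])
active = energies > 0.1 * energies.max()
start = np.argmax(active) * frame / fs                          # first active frame
end = (len(active) - 1 - np.argmax(active[::-1])) * frame / fs  # last active frame
```

Without the pre-filter, the 30 Hz hum would contribute substantially to the frame energies; after it, the hum frames fall far below the threshold and only the burst is detected.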

A New Speech Enhancement Method Using Adaptive Digital Filter (적응디지털필터를 사용한 음질향상 방법)

  • 임용훈;김완구;차일환;윤대희
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.30B no.10
    • /
    • pp.35-41
    • /
    • 1993
  • In this paper, a new speech enhancement method is proposed for speech signals corrupted by environmental noise. Two signals are obtained: one from a microphone and one from an accelerometer attached to the neck. Since both signals originate from the same source, they are closely correlated, while environmental noise has no effect on the accelerometer signal. The speech enhancement system identifies the optimum linear system between the two signals on the basis of this dependence, and the enhanced speech is obtained by filtering the noise-free accelerometer signal through it. Since the characteristics of the speech signal and the environmental noise change over time, an adaptive filtering system must be used to track the time-varying system. Simulation results show a 7 dB enhancement at a 0 dB speech signal level relative to white noise.
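The optimum linear system between the accelerometer and microphone signals can be identified adaptively. A minimal LMS sketch on synthetic signals, where the filter length, step size, and toy acoustic path are assumptions rather than the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
accel = rng.normal(size=n)                  # noise-free accelerometer signal (toy)
h = np.array([0.8, -0.4, 0.2])              # unknown path from accelerometer to mic
clean = np.convolve(accel, h)[:n]           # speech component at the microphone
mic = clean + 0.5 * rng.normal(size=n)      # microphone: speech + environmental noise

# LMS identification of the accelerometer-to-microphone system.
L, mu = 3, 0.01
w = np.zeros(L)
out = np.zeros(n)
for i in range(L, n):
    u = accel[i - L + 1:i + 1][::-1]   # most recent L accelerometer samples
    out[i] = w @ u                     # enhanced speech: filtered accelerometer signal
    e = mic[i] - out[i]                # error (noise-driven) used for adaptation
    w += mu * e * u
```

Because the environmental noise is uncorrelated with the accelerometer signal, the LMS weights converge toward the true path, and the filter output approximates the clean speech component at the microphone.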

Single-Channel Speech Separation Using Phase Model-Based Soft Mask (위상 모델 기반의 소프트 마스크를 이용한 단일 채널 음성분리)

  • Lee, Yun-Kyung;Kwon, Oh-Wook
    • The Journal of the Acoustical Society of Korea
    • /
    • v.29 no.2
    • /
    • pp.141-147
    • /
    • 2010
  • In this paper, we propose a new speech separation algorithm that extracts and enhances target speech signals from mixed speech signals by utilizing both magnitude and phase information. Since previous statistical modeling algorithms assume that the log power spectrum values of the mixed speech signals are independent across time and frequency, discontinuities occur in the separated speech signals. To reduce these discontinuities, we apply a smoothing filter in the time-frequency domain. To further improve separation performance, we propose a statistical model based on both the magnitude and phase information of the speech signals. Experimental results show that the proposed algorithm improves the signal-to-interference ratio (SIR) by 1.5 dB compared with previous magnitude-only algorithms.
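The magnitude-based soft mask that the proposed phase model extends can be sketched as an oracle ratio mask on exact-bin tones; a real separator replaces the known source spectra with statistical models, and the paper's contribution is modeling phase as well:

```python
import numpy as np

fs = 8000
t = np.arange(0, 0.5, 1 / fs)
s1 = np.sin(2 * np.pi * 440 * t)          # target source (toy tone)
s2 = 0.8 * np.sin(2 * np.pi * 1200 * t)   # interfering source (toy tone)
mix = s1 + s2

# Spectra; the sources are known here (oracle) for illustration only.
S1, S2, M = np.fft.rfft(s1), np.fft.rfft(s2), np.fft.rfft(mix)

# Soft (ratio) mask per frequency bin, values in [0, 1].
mask = np.abs(S1) ** 2 / (np.abs(S1) ** 2 + np.abs(S2) ** 2 + 1e-12)

# Apply the mask to the mixture spectrum, keeping the mixture phase.
est = np.fft.irfft(mask * M, n=len(mix))

sir_before = 10 * np.log10(np.sum(s1 ** 2) / np.sum(s2 ** 2))
sir_after = 10 * np.log10(np.sum(s1 ** 2) / (np.sum((est - s1) ** 2) + 1e-12))
```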

Voiced, Unvoiced, and Silence Classification of Human Speech Signals by Emphasis Characteristics of the Spectrum (Spectrum 강조특성을 이용한 음성신호에서 Voiced-Unvoiced-Silence 분류)

  • 배명수;안수길
    • The Journal of the Acoustical Society of Korea
    • /
    • v.4 no.1
    • /
    • pp.9-15
    • /
    • 1985
  • In this paper, we describe a new algorithm for deciding whether a given segment of a speech signal should be classified as voiced speech, unvoiced speech, or silence, based on parameters measured from the signal. The parameter measured for the voiced-unvoiced classification is the area of each zero-crossing interval, given by the product of the magnitude and the inverse zero-crossing rate of the speech signal. The parameter employed for the unvoiced-silence classification is the positive area summation over four-millisecond intervals of the high-frequency-emphasized speech signal.
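A rough frame-level sketch of the voiced/unvoiced/silence decision, using the magnitude-times-inverse-zero-crossing-rate parameter described above; the thresholds and the simple silence test are illustrative simplifications of the paper's scheme:

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 8000
t = np.arange(0, 0.02, 1 / fs)   # one 20 ms segment

def classify(x, sil_thr=0.02, area_thr=2.0):
    """Classify one segment as voiced, unvoiced, or silence."""
    zcr = np.mean(np.abs(np.diff(np.sign(x)))) / 2   # zero crossings per sample
    mag = np.mean(np.abs(x))                         # average magnitude
    if mag < sil_thr:
        return "silence"
    # Area per zero-crossing interval: magnitude times inverse ZCR.
    return "voiced" if mag / (zcr + 1e-6) > area_thr else "unvoiced"

voiced = np.sin(2 * np.pi * 150 * t)         # few crossings, large area
unvoiced = 0.3 * rng.normal(size=len(t))     # many crossings, small area
silence = 0.005 * rng.normal(size=len(t))
```

Voiced speech has large magnitude and few zero crossings, so its per-interval area is large; unvoiced speech crosses zero often, shrinking the area even at moderate magnitude.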

Correlation between Physical Fatigue and Speech Signals (육체피로와 음성신호와의 상관관계)

  • Kim, Taehun;Kwon, Chulhong
    • Phonetics and Speech Sciences
    • /
    • v.7 no.1
    • /
    • pp.11-17
    • /
    • 2015
  • This paper deals with the correlation between physical fatigue and speech signals. A treadmill task to induce fatigue and a subjective questionnaire for rating tiredness were designed. The questionnaire results and the collected bio-signals showed that the designed task imposes physical fatigue. A paired-samples t-test between the speech parameters and fatigue showed that the parameters statistically significant with respect to fatigue are the fundamental frequency, the first and second formant frequencies, the long-term average spectral slope, the smoothed pitch perturbation quotient, the relative average perturbation, the pitch perturbation quotient, the cepstral peak prominence, and the harmonics-to-noise ratio. The experimental results indicate that the mouth opens less widely and the voice becomes breathy as physical fatigue accumulates.
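The statistical test behind these results is a paired-samples t-test, which can be computed directly. The before/after F0 values below are synthetic illustrations, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy paired data: per-speaker F0 (Hz) before and after the treadmill task
# (synthetic; F0 is one of the parameters the paper reports as significant).
before = rng.normal(120.0, 10.0, size=20)
after = before - rng.normal(5.0, 2.0, size=20)   # simulated post-fatigue change

# Paired-samples t-statistic: t = mean(d) / (std(d) / sqrt(n))
d = before - after
t_stat = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
```

The statistic is then compared against the t distribution with n-1 degrees of freedom to obtain a p-value.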

Application of Shape Analysis Techniques for Improved CASA-Based Speech Separation (CASA 기반 음성분리 성능 향상을 위한 형태 분석 기술의 응용)

  • Lee, Yun-Kyung;Kwon, Oh-Wook
    • MALSORI
    • /
    • no.65
    • /
    • pp.153-168
    • /
    • 2008
  • We propose a new method for applying shape analysis techniques to a computational auditory scene analysis (CASA)-based speech separation system. The conventional CASA-based speech separation system extracts speech signals from a mixture of speech and noise signals. In the proposed method, we recover the missing speech signals by applying shape analysis techniques such as labelling and the distance function. In the speech separation experiment, the proposed method improves the signal-to-noise ratio by 6.6 dB. When used as a front-end for speech recognizers, it improves recognition accuracy by 22 % in the speech-shaped stationary noise condition and by 7.2 % in the two-talker noise condition at target-to-masker ratios greater than or equal to -3 dB.
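Labelling, one of the shape analysis techniques mentioned, amounts to connected-component analysis of the binary time-frequency mask. A minimal sketch on a toy mask (the paper applies this to CASA masks of real mixtures):

```python
import numpy as np
from collections import deque

def label_regions(mask):
    """4-connected component labelling of a binary time-frequency mask."""
    labels = np.zeros(mask.shape, dtype=int)
    cur = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and labels[i, j] == 0:
                cur += 1                      # start a new region
                q = deque([(i, j)])
                labels[i, j] = cur
                while q:                      # flood-fill the region
                    a, b = q.popleft()
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        x, y = a + da, b + db
                        if (0 <= x < mask.shape[0] and 0 <= y < mask.shape[1]
                                and mask[x, y] and labels[x, y] == 0):
                            labels[x, y] = cur
                            q.append((x, y))
    return labels, cur

# Toy T-F mask with two speech fragments separated by a masked-out gap.
mask = np.zeros((6, 10), dtype=bool)
mask[1:3, 1:4] = True
mask[4:6, 6:9] = True
labels, n_regions = label_regions(mask)
```

A distance function between labelled regions can then decide which nearby fragments belong to the same speech event, so the gap between them can be filled in.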

Analysis of Eigenvalues of Covariance Matrices of Speech Signals in Frequency Domain (음성 신호의 주파수 영역에서의 공분산행렬의 고유값 분석)

  • Kim, Seonil
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2015.05a
    • /
    • pp.47-50
    • /
    • 2015
  • Speech signals consist of consonant and vowel segments, but vowels last much longer than consonants, so the correlations between blocks of a speech signal can be assumed to be very high. Each speech signal is divided into blocks of 128 samples, and the FFT is applied to each block. The low-frequency part of each FFT result is taken, the covariance matrix between the blocks of the speech signal is computed, and finally the eigenvalues of that matrix are obtained. The distribution of these eigenvalues across various speech files is studied, as are the differences between speech signals and car noise signals.
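The block/FFT/covariance/eigenvalue pipeline can be sketched directly. The signals below are synthetic stand-ins: a slowly amplitude-modulated tone gives strongly correlated "vowel-like" blocks, while white noise stands in for car noise:

```python
import numpy as np

rng = np.random.default_rng(4)
fs = 8000
t = np.arange(0, 1.024, 1 / fs)   # 8192 samples -> 64 blocks of 128

def block_eigenvalues(x, block=128, n_low=16):
    """Eigenvalues of the covariance (across blocks) of low-frequency FFT magnitudes."""
    blocks = x[: len(x) // block * block].reshape(-1, block)
    spec = np.abs(np.fft.rfft(blocks, axis=1))[:, :n_low]   # low-frequency bins
    C = np.cov(spec, rowvar=False)                          # covariance between bins over blocks
    return np.sort(np.linalg.eigvalsh(C))[::-1]             # descending eigenvalues

env = 1.0 + 0.8 * np.sin(2 * np.pi * 3 * t)
speech_like = env * np.sin(2 * np.pi * 125 * t) + 0.05 * rng.normal(size=len(t))
noise_like = rng.normal(size=len(t))

ev_speech = block_eigenvalues(speech_like)
ev_noise = block_eigenvalues(noise_like)
```

For the correlated signal the variance concentrates in a few dominant eigenvalues, whereas for white noise the eigenvalues are spread nearly evenly, which is the kind of contrast the paper examines.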

Analysis of Transient Features in Speech Signal by Estimating the Short-term Energy and Inflection points (변곡점 및 단구간 에너지평가에 의한 음성의 천이구간 특징분석)

  • Choi, I.H.;Jang, S.K.;Cha, T.H.;Choi, U.S.;Kim, C.S.
    • Speech Sciences
    • /
    • v.3
    • /
    • pp.156-166
    • /
    • 1998
  • In this paper, we propose a segmentation method based on estimating the inflection points and the average magnitude energy of speech signals. The proposed method not only gives a satisfactory solution to the problems of zero-crossing-rate-based segmentation, but also allows the features of the transient period to be estimated after separating the starting point and the transient period that precede the steady state. In experiments with monosyllabic speech, it was found that even when the speech samples contained a DC level, the starting and ending points of the speech signals were divided exactly by the method. In addition, features such as the length of the transient period, the short-term energy, and the frequency characteristics could be compared across speech signals.
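The short-term energy and inflection-point analysis can be sketched as follows. The monosyllable is synthetic and the frame size and threshold are illustrative, not the paper's:

```python
import numpy as np

fs = 8000
t = np.arange(0, 0.3, 1 / fs)

# Toy monosyllable with a DC level: silence, a linear-onset transient,
# then a steady 200 Hz "vowel".
env = np.where(t < 0.05, 0.0, np.minimum(1.0, (t - 0.05) / 0.05))
x = 0.1 + env * np.sin(2 * np.pi * 200 * t)   # 0.1 = DC level

# Short-term average magnitude, computed on the mean-removed signal so the
# DC level cannot mask the true starting point.
frame = 80  # 10 ms
xc = x - np.mean(x)
energy = np.array([np.mean(np.abs(xc[i:i + frame]))
                   for i in range(0, len(xc) - frame, frame)])

# Starting point: first frame whose energy exceeds a fraction of the maximum.
start_frame = int(np.argmax(energy > 0.05 * energy.max()))

# Inflection points of the energy contour: sign changes of its second
# difference mark where the transient bends into the steady state.
d2 = np.diff(energy, 2)
inflections = np.where(np.diff(np.sign(d2)) != 0)[0] + 1
```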
