• Title/Summary/Keyword: speech parameter

Search results: 373

Robust Speech Decoding Using Channel-Adaptive Parameter Estimation.

  • Lee, Yun-Keun;Lee, Hwang-Soo
    • The Journal of the Acoustical Society of Korea / v.18 no.1E / pp.3-6 / 1999
  • In digital mobile communication systems, transmission errors seriously affect the quality of the output speech. Many error concealment techniques use the a posteriori probability, which provides information about a transmitted parameter. To estimate the transmitted parameters, they require knowledge of the channel transition probability as well as the first-order Markov transition probability of the codec parameters. In mobile communication systems, however, the channel transition probability varies with the nonstationary channel characteristics, and a mismatch between the estimator's designed channel transition probability and the actual one degrades its performance. In this paper, we propose a new parameter estimator that adapts to the channel characteristics using a short-time average of the maximum a posteriori probabilities (MAPs). When applied to LSP parameter estimation, the proposed scheme performed better than a conventional estimator that does not adapt to the channel characteristics.

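As a minimal sketch of the a posteriori estimation this abstract builds on: the code below computes per-symbol MAP estimates of a binary first-order Markov parameter observed through a binary symmetric channel with crossover probability `eps`. The binary alphabet, uniform initial prior, and forward-only recursion are simplifying assumptions, not details from the paper, which further adapts the channel model from a short-time average of the MAP values.

```python
def map_decode(received, trans, eps):
    """MAP estimates of a binary first-order Markov parameter sequence
    observed through a binary symmetric channel (crossover prob. eps).
    trans[i][j] = P(x_t = j | x_{t-1} = i); the prior on x_0 is uniform."""
    def lik(y, x):  # channel likelihood P(y | x)
        return 1.0 - eps if y == x else eps

    alpha = [0.5 * lik(received[0], x) for x in (0, 1)]
    total = sum(alpha)
    alpha = [a / total for a in alpha]
    estimates = [max((0, 1), key=lambda x: alpha[x])]
    for y in received[1:]:
        # forward recursion: predict with the Markov prior, correct with y
        alpha = [lik(y, x) * sum(alpha[p] * trans[p][x] for p in (0, 1))
                 for x in (0, 1)]
        total = sum(alpha)
        alpha = [a / total for a in alpha]
        estimates.append(max((0, 1), key=lambda x: alpha[x]))
    return estimates
```

A mismatched `eps` is exactly the degradation the paper addresses: the posterior weights the channel likelihood against the Markov prior, so a wrong `eps` skews every estimate.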

Estimation of speech feature vectors and enhancement of speech recognition performance using lip information (입술정보를 이용한 음성 특징 파라미터 추정 및 음성인식 성능향상)

  • Min So-Hee;Kim Jin-Young;Choi Seung-Ho
    • MALSORI / no.44 / pp.83-92 / 2002
  • Speech recognition performance is severely degraded in noisy environments. One approach to this problem is audio-visual speech recognition. In this paper, we discuss experimental results of bimodal speech recognition based on speech feature vectors enhanced using lip information. To transform lip information into speech parameters, we tried various speech features such as linear prediction coefficients, cepstrum, and log area ratios. The experimental results show that the cepstrum is the best feature in terms of recognition rate. We also present desirable weighting values for the audio and visual information depending on the signal-to-noise ratio.

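The SNR-dependent weighting of the two modalities can be sketched as a convex combination of audio and visual recognition scores. The linear ramp and its endpoint SNRs below are illustrative placeholders; the paper determines the weights empirically per noise condition.

```python
def bimodal_score(audio_ll, visual_ll, snr_db, lo=-5.0, hi=20.0):
    """Combine audio and visual log-likelihood scores with an audio
    weight lam that grows linearly with the estimated SNR between
    lo and hi dB: clean audio dominates at high SNR, lip information
    dominates at low SNR."""
    lam = min(1.0, max(0.0, (snr_db - lo) / (hi - lo)))
    return lam * audio_ll + (1.0 - lam) * visual_ll
```

At `snr_db >= hi` the score is purely acoustic; at `snr_db <= lo` it is purely visual.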

Noise-Robust Speech Detection Using The Coefficient of Variation of Spectrum (스펙트럼의 변동계수를 이용한 잡음에 강인한 음성 구간 검출)

  • Kim Youngmin;Hahn Minsoo
    • MALSORI / no.48 / pp.107-116 / 2003
  • This paper deals with a new parameter for voice detection, which is used in many areas of speech engineering such as speech synthesis, speech recognition, and speech coding. The coefficient of variation (CV) of the speech spectrum, together with other feature parameters, is used to detect speech. The CV is calculated only over a specific range of the speech spectrum. Average magnitude and spectral magnitude are also employed to improve the detector's performance. Experimental results show that the proposed voice detector outperforms a conventional energy-based detector in terms of error measurements.

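The core statistic is simple to state: the CV is the standard deviation of the in-band spectral magnitudes divided by their mean. A minimal sketch, with the band limits and decision threshold as stand-ins for the paper's "specific range" and tuning:

```python
import math

def spectral_cv(mag, lo_bin, hi_bin):
    """Coefficient of variation (std / mean) of a magnitude spectrum,
    restricted to bins [lo_bin, hi_bin)."""
    band = mag[lo_bin:hi_bin]
    mean = sum(band) / len(band)
    var = sum((m - mean) ** 2 for m in band) / len(band)
    return math.sqrt(var) / mean if mean > 0.0 else 0.0

def is_speech_frame(mag, lo_bin, hi_bin, threshold=0.6):
    """Voiced speech has pronounced harmonic peaks, so its in-band CV
    exceeds that of broadband noise; the threshold is illustrative."""
    return spectral_cv(mag, lo_bin, hi_bin) > threshold
```

Because the CV is normalized by the mean, it is less sensitive to overall level than a plain energy detector, which is the advantage the abstract claims.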

Development of Speech Training Aids Using Vocal Tract Profile (조음도를 이용한 발음훈련기기의 개발)

  • 박상희;김동준;이재혁;윤태성
    • The Transactions of the Korean Institute of Electrical Engineers / v.41 no.2 / pp.209-216 / 1992
  • The deaf train their articulation by observing a tutor's mouth, by tactually sensing the motions of the vocal organs, or by using speech training aids. Present speech training aids for the deaf can measure only a single speech parameter, or display only frequency spectra as histograms or pseudo-color images. In this study, we develop a speech training aid that displays the subject's articulation as a cross section of the vocal organs, together with other speech parameters, in a single system, so that the subject knows what to correct. To this end, the speech production mechanism is first modeled as an AR process in order to estimate the articulatory motions of the vocal organs from the speech signal. Next, a vocal tract profile model based on LP analysis is constructed. Using this model, articulatory motions for Korean vowels are estimated and displayed as vocal tract profile graphics.

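The LP-to-profile step can be illustrated: under a lossless-tube model, the reflection (PARCOR) coefficients obtained from Levinson-Durbin analysis map to tube-section areas via A_{i+1} = A_i (1 - k_i)/(1 + k_i). This sketch assumes that standard model and a unit lip area; the paper's actual profile rendering is not reproduced here.

```python
def reflection_coeffs(frame, order):
    """PARCOR (reflection) coefficients of one analysis frame via
    autocorrelation and the Levinson-Durbin recursion."""
    n = len(frame)
    r = [sum(frame[i] * frame[i + k] for i in range(n - k))
         for k in range(order + 1)]
    a = [0.0] * (order + 1)
    ks = []
    err = r[0]
    for m in range(1, order + 1):
        acc = r[m] + sum(a[j] * r[m - j] for j in range(1, m))
        k = -acc / err
        ks.append(k)
        new_a = a[:]
        new_a[m] = k
        for j in range(1, m):
            new_a[j] = a[j] + k * a[m - j]
        a = new_a
        err *= (1.0 - k * k)
    return ks

def area_function(ks, lip_area=1.0):
    """Lossless-tube section areas from reflection coefficients,
    A_{i+1} = A_i * (1 - k_i) / (1 + k_i), working inward from the lips."""
    areas = [lip_area]
    for k in ks:
        areas.append(areas[-1] * (1.0 - k) / (1.0 + k))
    return areas
```

The resulting area sequence is what a vocal tract profile display would render as a cross section.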

Single-Channel Non-Causal Speech Enhancement to Suppress Reverberation and Background Noise

  • Song, Myung-Suk;Kang, Hong-Goo
    • The Journal of the Acoustical Society of Korea / v.31 no.8 / pp.487-506 / 2012
  • This paper proposes a speech enhancement algorithm that improves speech intelligibility by suppressing both reverberation and background noise. The algorithm adopts a non-causal single-channel minimum variance distortionless response (MVDR) filter to exploit the additional information contained in the noisy-reverberant signals of subsequent frames. The noisy-reverberant signals are decomposed into the desired signal and the interference uncorrelated with it. The filter equation is then derived from the MVDR criterion to minimize the residual interference without introducing speech distortion. The estimation of the correlation parameter, which largely determines the overall performance of the system, is derived mathematically from a general statistical reverberation model, and practical methods are developed to estimate the sub-parameters this estimate requires. The efficiency of the proposed algorithm is verified by performance evaluation: it achieves significant improvement in all studied conditions and is especially superior in severely noisy, strongly reverberant environments.

Speech Intelligibility and Vowel Space Characteristics of Alaryngeal Speech (무후두음성의 말 명료도와 모음 공간 특성)

  • Shim, Hee-Jeong;Jang, Hyo-Ryung;Ko, Do-Heung
    • Phonetics and Speech Sciences / v.5 no.4 / pp.17-24 / 2013
  • This study aims to characterize the speech of twenty-six male patients with total or partial laryngectomy, categorized by the voice rehabilitation technique used. The speech intelligibility of standard esophageal speech (SE), tracheoesophageal speech (TE), and electrolarynx speech (EL) was measured using the CSL, with eleven listeners rating the speech on a 5-point scale. Vowel space parameters such as vowel space area, VAI, FCR, and F2 ratio were computed by averaging 5 repetitions of each vowel (/a/, /e/, /i/, /u/) and entering the results into the parameter formulas. The results showed statistically significant differences in speech intelligibility and vowel space between SE and TE. The speech intelligibility and vowel space of TE were higher than those of SE or EL, and speech intelligibility correlated highly with some parameters (vowel space area, VAI, F2 ratio). TE was most similar to normal speech compared with SE and EL, but still deviated considerably from laryngeal speech, owing to insufficient airflow intake into the esophagus during production and to differences in articulatory movement among the groups. These findings will contribute to establishing a baseline for the speech characteristics targeted in voice rehabilitation of patients with alaryngeal speech.
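The vowel space parameters named above have commonly used definitions; a sketch assuming the usual corner-vowel formulas (the paper's exact variants, e.g. its use of /e/, may differ):

```python
def vai(f1, f2):
    """Vowel Articulation Index from corner-vowel formants (Hz):
    VAI = (F2_i + F1_a) / (F1_i + F1_u + F2_u + F2_a).
    f1 and f2 are dicts keyed by vowel symbol."""
    return (f2['i'] + f1['a']) / (f1['i'] + f1['u'] + f2['u'] + f2['a'])

def fcr(f1, f2):
    """Formant Centralization Ratio, the reciprocal of the VAI;
    centralized (reduced) vowels push the FCR above 1."""
    return 1.0 / vai(f1, f2)

def vowel_space_area(points):
    """Area of the polygon spanned by (F1, F2) vowel points, given in
    polygon order, via the shoelace formula."""
    n = len(points)
    s = sum(points[i][0] * points[(i + 1) % n][1]
            - points[(i + 1) % n][0] * points[i][1]
            for i in range(n))
    return abs(s) / 2.0
```

A shrinking vowel space area and a falling VAI both indicate the vowel centralization that distinguishes the alaryngeal groups in the study.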

A Correlation Study among Acoustic Parameters of MDVP, Praat, and Dr. Speech (MDVP와 Praat, Dr. Speech간의 음향학적 측정치에 관한 상관연구)

  • Yoo, Jae-Yeon;Jeong, Ok-Ran;Jang, Tae-Yeoub;Ko, Do-Heung
    • Speech Sciences / v.10 no.3 / pp.29-36 / 2003
  • The purpose of this study was to conduct a correlational analysis among F0, jitter, shimmer, NHR (HNR), and NNE as estimated by three speech analysis programs: MDVP, Praat, and Dr. Speech. Thirty females and fifteen males with normal voice participated in the study. Their voices were recorded with Sound Forge 6.0, and the acoustic parameters were measured with MDVP, Praat, and Dr. Speech. Pearson correlation coefficients were computed through statistical analysis. The results were as follows: first, F0 and shimmer correlated strongly across the programs, whereas jitter did not. Second, shimmer showed a stronger correlation with HNR, NHR, and NNE than jitter did. Shimmer was therefore considered a more useful and sensitive parameter than jitter for identifying dysphonic voice.

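The study's statistic is the ordinary Pearson correlation between the same parameter measured by two programs over the same voices; a self-contained sketch:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two measurement series, e.g. jitter
    values for the same voices from two different analysis programs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A value near 1 across programs (as found for F0 and shimmer) means the programs rank the voices the same way; the low cross-program jitter correlation is what makes jitter values non-comparable between tools.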

Synthetic Speech Quality Improvement By Glottal parameter Interpolation - Preliminary study on open quotient interpolation in the speech corpus - (성대특성 보간에 의한 합성음의 음질향상 - 음성코퍼스 내 개구간 비 보간을 위한 기초연구 -)

  • Bae, Jae-Hyun;Oh, Yung-Hwa
    • Proceedings of the KSPS conference / 2005.11a / pp.63-66 / 2005
  • For large-corpus-based TTS, the consistency of the speech corpus is very important, because inconsistency in speech quality within the corpus may cause distortion at concatenation points; because of this inconsistency, a large corpus must be tuned repeatedly. One cause of corpus inconsistency is the differing glottal characteristics of the sentences in the corpus. In this paper, we adjust the glottal characteristics of the speech in the corpus to prevent this distortion, and present experimental results.


A Novel Approach to a Robust A Priori SNR Estimator in Speech Enhancement (음성 향상에서 강인한 새로운 선행 SNR 추정 기법에 관한 연구)

  • Park, Yun-Sik;Chang, Joon-Hyuk
    • The Journal of the Acoustical Society of Korea / v.25 no.8 / pp.383-388 / 2006
  • This paper presents a novel approach to single-channel microphone speech enhancement in noisy environments. Widely used noise reduction techniques based on spectral subtraction are generally expressed as a spectral gain depending on the signal-to-noise ratio (SNR). The well-known decision-directed (DD) estimator of Ephraim and Malah efficiently reduces musical noise under background noise conditions, but introduces a delay in the a priori SNR because the DD weights the speech spectrum component of the previous frame. The noise suppression gain, affected by this delay, therefore matches the previous frame rather than the current one, which degrades the noise reduction performance during speech transients. We propose a computationally simple but effective speech enhancement technique based on a sigmoid-type function for the weight parameter of the DD. The proposed approach solves the delay problem of the DD's main parameter, the a priori SNR, while retaining the benefits of the DD. The performance of the proposed algorithm is evaluated with the ITU-T P.862 Perceptual Evaluation of Speech Quality (PESQ), the Mean Opinion Score (MOS), and speech spectrograms under various noise environments, and it yields better results than the DD with a fixed weight parameter.
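The decision-directed recursion and a sigmoid-type weight of the kind the abstract describes can be sketched as follows. The sigmoid's slope and center are illustrative placeholders, not the paper's tuned values, and the exact argument of the paper's sigmoid is not reproduced here.

```python
import math

def dd_a_priori_snr(prev_term, gamma, alpha):
    """One decision-directed update of the a priori SNR:
    xi = alpha * prev_term + (1 - alpha) * max(gamma - 1, 0),
    where prev_term is |A(t-1)|^2 / lambda_d (previous clean-speech
    power estimate over noise power) and gamma is the current frame's
    a posteriori SNR."""
    return alpha * prev_term + (1.0 - alpha) * max(gamma - 1.0, 0.0)

def sigmoid_weight(gamma, slope=1.0, center=1.0):
    """Sigmoid-type weight replacing the fixed alpha (typically 0.98):
    it shrinks as the a posteriori SNR rises, so the previous-frame
    term counts less during speech transients and the one-frame delay
    of the a priori SNR is reduced."""
    return 1.0 / (1.0 + math.exp(slope * (gamma - center)))
```

With a fixed `alpha` close to 1, `xi` tracks the previous frame; making `alpha` a decreasing function of `gamma` lets `xi` jump at speech onsets while keeping the DD's musical-noise suppression in stationary segments.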

Korean isolated word recognizer using new time alignment method of speech signal (새로운 시간축 정규화 방법을 이용한 한국어 고립단어 인식기)

  • Nam, Myeong-U;Park, Gyu-Hong;No, Seung-Yong
    • Journal of the Institute of Electronics Engineers of Korea SP / v.38 no.5 / pp.567-575 / 2001
  • This paper suggests a new method for obtaining a fixed-size parameter from voice signals of different lengths. The efficiency of a speech recognizer is determined by how the similarity (distance between patterns) of the parameters extracted from the voice signal is compared, but variation in the voice signal and differences in speaking rate make it difficult to extract a fixed-size parameter. The method suggested in this paper represents the parameter as a spectrogram and then normalizes it to a fixed size using the two-dimensional DCT (Discrete Cosine Transform). To validate the method, parameters extracted from a 32-channel auditory filter bank (which estimates auditory nerve firing probabilities) are processed by the 2-D DCT and used as input to a neural network. For comparison, we also used a conventional method that solves the time alignment problem. The results show more efficient performance and faster recognition than the conventional method in both speaker-dependent and speaker-independent isolated word recognition.

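The fixed-size normalization can be illustrated with a direct (unnormalized) 2-D DCT-II of a frames-by-channels spectrogram, keeping only the low-order coefficients. The matrix sizes and the number of retained coefficients below are illustrative, not the paper's settings.

```python
import math

def dct2_truncated(spectrogram, keep_rows, keep_cols):
    """Unnormalized 2-D DCT-II of a variable-size spectrogram
    (rows = frames, cols = filter-bank channels), truncated to the
    low-order keep_rows x keep_cols coefficients so that inputs of
    any duration yield a parameter of the same fixed size."""
    n_r, n_c = len(spectrogram), len(spectrogram[0])
    return [[sum(spectrogram[r][c]
                 * math.cos(math.pi * (r + 0.5) * u / n_r)
                 * math.cos(math.pi * (c + 0.5) * v / n_c)
                 for r in range(n_r) for c in range(n_c))
             for v in range(keep_cols)]
            for u in range(keep_rows)]
```

Because the time axis is absorbed into a fixed number of low-order DCT coefficients, utterances of any duration map to the same parameter size, which is how the method sidesteps explicit time alignment.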