• Title/Summary/Keyword: unvoiced sound


A Study on a Method of U/V Decision by Using The LSP Parameter in The Speech Signal (LSP 파라미터를 이용한 음성신호의 성분분리에 관한 연구)

  • 이희원;나덕수;정찬중;배명진
    • Proceedings of the IEEK Conference
    • /
    • 1999.06a
    • /
    • pp.1107-1110
    • /
    • 1999
  • In speech signal processing, an accurate voiced/unvoiced decision is important for robust word recognition and analysis and for high coding efficiency. In this paper, we propose a method for the voiced/unvoiced decision using the LSP parameters, which represent the spectral characteristics of the speech signal. Voiced sound has more of its LSP parameters in the low-frequency region, whereas unvoiced sound has more in the high-frequency region; that is, the LSP parameter distribution of voiced sound differs from that of unvoiced sound. In addition, voiced sound has the minimum interval between adjacent LSP parameters in the low-frequency region, while unvoiced sound has it in the high-frequency region. We decide voiced/unvoiced using these characteristics. We applied the proposed method to continuous speech and achieved good performance. (A minimal code sketch of this decision rule follows this entry.)

  • PDF
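
The decision rule described in the abstract above can be illustrated with a short Python sketch. This is not the authors' implementation: it assumes librosa for LPC analysis, converts LPC coefficients to line spectral frequencies by polynomial rooting, and uses an arbitrary split at π/2 between the "low" and "high" frequency regions.

```python
import numpy as np
import librosa  # assumed available for LPC analysis

def lpc_to_lsf(a):
    """Convert LPC coefficients [1, a1, ..., ap] to line spectral frequencies (radians)."""
    a_ext = np.concatenate([a, [0.0]])
    p_poly = a_ext + a_ext[::-1]   # symmetric (P) polynomial
    q_poly = a_ext - a_ext[::-1]   # antisymmetric (Q) polynomial
    lsf = []
    for poly in (p_poly, q_poly):
        angles = np.angle(np.roots(poly))
        lsf.extend(w for w in angles if 0.0 < w < np.pi)
    return np.sort(np.array(lsf))

def is_voiced(frame, order=10, split=np.pi / 2):
    """Toy V/UV decision: voiced frames tend to concentrate their LSFs (and the
    minimum adjacent LSF spacing) in the low-frequency half."""
    lsf = lpc_to_lsf(librosa.lpc(np.asarray(frame, dtype=float), order=order))
    low_count = np.sum(lsf < split)
    gaps = np.diff(lsf)
    min_gap_in_low = lsf[np.argmin(gaps)] < split   # where the tightest LSF pair sits
    return (low_count > len(lsf) / 2) and min_gap_in_low
```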

Voiced/Unvoiced/Silence Classification of Speech Signal Using Wavelet Transform (웨이브렛 변환을 이용한 음성신호의 유성음/무성음/묵음 분류)

  • Son, Young-Ho;Bae, Keun-Sung
    • Speech Sciences
    • /
    • v.4 no.2
    • /
    • pp.41-54
    • /
    • 1998
  • Speech signals are, depending on the characteristics of the waveform, classified as voiced sound, unvoiced sound, or silence. Voiced sound, produced by an air flow generated by the vibration of the vocal cords, is quasi-periodic, while unvoiced sound, produced by a turbulent air flow passing through some constriction in the vocal tract, is noise-like. Silence represents the ambient noise signal during the absence of speech. The need to decide whether a given segment of a speech waveform should be classified as voiced, unvoiced, or silence arises in many speech analysis systems. In this paper, a voiced/unvoiced/silence classification algorithm using spectral change in the wavelet-transformed signal is proposed, and experimental results are presented and discussed. (A simplified subband-energy classifier is sketched after this entry.)

  • PDF
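
As a rough illustration of three-way V/UV/S labeling on wavelet-transformed frames, the sketch below uses a simplified subband-energy heuristic with PyWavelets rather than the paper's spectral-change measure; the wavelet ('db4'), decomposition level, and thresholds are illustrative assumptions.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def classify_frame(frame, silence_thresh=1e-4, voiced_ratio=0.5):
    """Toy V/UV/S labeler from DWT subband energies: quiet frames are silence,
    frames dominated by the low (approximation) band are voiced, the rest unvoiced."""
    coeffs = pywt.wavedec(np.asarray(frame, dtype=float), 'db4', level=3)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    total = energies.sum()
    if total < silence_thresh:
        return 'silence'
    return 'voiced' if energies[0] / total > voiced_ratio else 'unvoiced'
```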

Separation of Voiced Sounds and Unvoiced Sounds for Corpus-based Korean Text-To-Speech (한국어 음성합성기의 성능 향상을 위한 합성 단위의 유무성음 분리)

  • Hong, Mun-Ki;Shin, Ji-Young;Kang, Sun-Mee
    • Speech Sciences
    • /
    • v.10 no.2
    • /
    • pp.7-25
    • /
    • 2003
  • Predicting the right prosodic elements is a key factor in improving the quality of synthesized speech. Prosodic elements include break, pitch, duration, and loudness. Pitch, which is realized as the fundamental frequency (F0), is the element most closely related to the quality of the synthesized speech. However, the previous method for predicting F0 reveals some problems. If voiced and unvoiced sounds are not correctly classified, the result is wrong pitch prediction, selection of the wrong triphone unit when synthesizing voiced and unvoiced sounds, and audible clicks or vibration noise. Such errors typically occur at transitions from voiced to unvoiced sound or vice versa. This problem cannot be resolved by grammar-based rules, and it strongly affects the synthesized sound. Therefore, to obtain stable and correct pitch values, this paper proposes a new model for classifying voiced and unvoiced sounds and predicting pitch using the CART tool. (A toy CART-style classifier is sketched after this entry.)

  • PDF
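
A CART-style voiced/unvoiced classifier can be prototyped with scikit-learn's DecisionTreeClassifier, which implements the CART algorithm. The sketch below is only a stand-in for the paper's model: the features (log energy and zero-crossing rate) and the tree depth are hypothetical, not the paper's feature set.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier  # CART-style decision tree

def frame_features(frame):
    """Illustrative per-frame features; the paper's actual feature set may differ."""
    frame = np.asarray(frame, dtype=float)
    energy = np.log(np.sum(frame ** 2) + 1e-12)
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0   # zero-crossing rate
    return [energy, zcr]

def train_vuv_tree(frames, labels):
    """Fit a small tree on labeled frames (labels: 1 = voiced, 0 = unvoiced)."""
    X = np.array([frame_features(f) for f in frames])
    tree = DecisionTreeClassifier(max_depth=5)
    tree.fit(X, np.asarray(labels))
    return tree
```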

An ACLMS-MPC Coding Method Integrated with ACFBD-MPC and LMS-MPC at 8kbps bit rate. (8kbps 비트율을 갖는 ACFBD-MPC와 LMS-MPC를 통합한 ACLMS-MPC 부호화 방식)

  • Lee, See-woo
    • Journal of Internet Computing and Services
    • /
    • v.19 no.6
    • /
    • pp.1-7
    • /
    • 2018
  • This paper presents an 8 kbps ACLMS-MPC (Amplitude Compensation and Least Mean Square - Multi Pulse Coding) method that integrates ACFBD-MPC (Amplitude Compensation Frequency Band Division - Multi Pulse Coding) and LMS-MPC (Least Mean Square - Multi Pulse Coding). It uses V/UV/S (Voiced/Unvoiced/Silence) switching, amplitude compensation of the multi-pulses in each pitch interval, and approximate synthesis of unvoiced sound using specific frequencies in order to reduce distortion of the synthesized waveform. In integrating these methods, it is important to keep the bit rate of the voiced and unvoiced sound sources at 8 kbps while reducing the distortion of the speech waveform. With the bit rate of the voiced and unvoiced sound sources set to 8 kbps, the speech waveform can be synthesized efficiently by restoring the individual pitch intervals from the multi-pulses of the representative interval. The ACLMS-MPC method was implemented and its SNR was evaluated under 8 kbps coding conditions (a sketch of the SNR measure is given below). As a result, the SNR of ACLMS-MPC was 15.0 dB for a female voice and 14.3 dB for a male voice. ACLMS-MPC thus improved on the existing MPC, ACFBD-MPC, and LMS-MPC by 0.3-1.8 dB for the male voice and 0.3-1.6 dB for the female voice. These methods are expected to be applicable to low-bit-rate speech coding, for example in cellular or Internet telephony. Future work will evaluate the sound quality of a 6.9 kbps speech coding method that simultaneously compensates the amplitude and position of the multi-pulse source.
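
The SNR figures quoted above are waveform SNR values; a minimal sketch of that measure is given below, assuming the original and synthesized signals are time-aligned NumPy arrays of equal length.

```python
import numpy as np

def snr_db(original, synthesized):
    """Waveform SNR in dB between an original signal and its re-synthesis."""
    original = np.asarray(original, dtype=float)
    noise = original - np.asarray(synthesized, dtype=float)
    return 10.0 * np.log10(np.sum(original ** 2) / (np.sum(noise ** 2) + 1e-12))
```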

Multi-Pulse Amplitude and Location Estimation by Maximum-Likelihood Estimation in MPE-LPC Speech Synthesis (MPE-LPC음성합성에서 Maximum-Likelihood Estimation에 의한 Multi-Pulse의 크기와 위치 추정)

  • 이기용;최홍섭;안수길
    • Journal of the Korean Institute of Telematics and Electronics
    • /
    • v.26 no.9
    • /
    • pp.1436-1443
    • /
    • 1989
  • In this paper, we propose a maximum-likelihood estimation (MLE) method to obtain the locations and amplitudes of the pulses in MPE (multi-pulse excitation)-LPC speech synthesis, which uses multi-pulses as the excitation source. The MLE method computes the values that maximize the likelihood function with respect to the unknown parameters (amplitudes and positions of the pulses) for the observed data sequence. Thus, in the case of overlapped pulses, the method is equivalent to Ozawa's cross-correlation method, requiring the same amount of computation and yielding the same sound quality. Computer simulations show that the multi-pulses obtained by the MLE method are (1) pseudo-periodic in pitch for voiced sound, (2) random for unvoiced sound, and (3) changing from random to periodic in intervals where the original speech changes from unvoiced to voiced. The short-time power spectra of the original speech and of speech synthesized using the multi-pulses as the excitation source are quite similar at the formants. (A greedy multi-pulse search in the spirit of the cross-correlation method is sketched after this entry.)

  • PDF
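
The cross-correlation view of multi-pulse estimation mentioned above can be illustrated with a greedy pulse search. This is a sketch, not the paper's MLE derivation: pulse amplitudes are computed one at a time and interactions between overlapping pulse responses are ignored.

```python
import numpy as np

def estimate_multipulse(target, h, n_pulses):
    """Greedy multi-pulse search: repeatedly place the pulse whose filtered
    response best correlates with the remaining modeling error.

    target   : speech segment to approximate
    h        : impulse response of the LPC synthesis filter
    n_pulses : number of excitation pulses to place
    """
    residual = np.asarray(target, dtype=float).copy()
    h = np.asarray(h, dtype=float)
    hh = np.sum(h ** 2)
    positions, amplitudes = [], []
    for _ in range(n_pulses):
        # cross-correlation of the current error with the impulse response
        corr = np.correlate(residual, h, mode='full')[len(h) - 1:]
        m = int(np.argmax(np.abs(corr)))
        g = corr[m] / hh                      # per-pulse amplitude (interactions ignored)
        positions.append(m)
        amplitudes.append(g)
        # remove this pulse's contribution from the error
        end = min(len(residual), m + len(h))
        residual[m:end] -= g * h[:end - m]
    return np.array(positions), np.array(amplitudes)
```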

Spectral subtraction based on speech state and masking effect

  • 김우일;강선미;고한석
    • Proceedings of the IEEK Conference
    • /
    • 1998.06a
    • /
    • pp.599-602
    • /
    • 1998
  • In this paper, a speech enhancement method based on phonemic properties and the masking effect is proposed. It is a modified form of spectral subtraction in which a spectral sharpening process is applied in the unvoiced state, taking the phonemic properties into account. The masking threshold is used to remove residual noise. The proposed spectral subtraction shows performance similar to that of the classical spectral subtraction method in terms of SNR, but with the proposed scheme the unvoiced regions exhibit relatively less signal distortion in the enhanced speech. (A sketch of the classical spectral-subtraction baseline follows this entry.)

  • PDF
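
The baseline that this paper modifies is plain magnitude spectral subtraction; a minimal sketch of that baseline follows, omitting the phoneme-dependent sharpening and masking threshold that are the paper's contribution. The STFT parameters and spectral floor are illustrative, and librosa is assumed for the STFT.

```python
import numpy as np
import librosa  # assumed available for STFT/ISTFT

def spectral_subtraction(noisy, noise_sample, n_fft=512, hop=128, floor=0.02):
    """Classical magnitude spectral subtraction with a spectral floor."""
    S = librosa.stft(noisy, n_fft=n_fft, hop_length=hop)
    N = librosa.stft(noise_sample, n_fft=n_fft, hop_length=hop)
    noise_mag = np.mean(np.abs(N), axis=1, keepdims=True)    # average noise magnitude spectrum
    mag = np.maximum(np.abs(S) - noise_mag, floor * np.abs(S))
    return librosa.istft(mag * np.exp(1j * np.angle(S)), hop_length=hop)
```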

Design and Implementation of Korean Text-to-Speech System (다이폰을 이용한 한국어 문자-음성 변환 시스템의 설계 및 구현)

  • 정준구
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1994.06c
    • /
    • pp.91-94
    • /
    • 1994
  • This paper is a study on the design and implementation of a Korean text-to-speech system. A parametric synthesis method is chosen for speech synthesis, and the PARCOR coefficients, obtained from LPC analysis, are used as the acoustic parameters. We use the diphone as the synthesis unit, since it preserves the basic naturalness of human speech. The diphone database consists of 1228 PCM files. The LPC synthesis method has the defect that the clarity of the synthesized speech declines when synthesizing unvoiced sounds. In this paper, we improve the clarity of the synthesized speech by using the residual signal as the excitation signal for unvoiced sounds (a sketch of this residual-excitation idea follows this entry). In addition, to improve naturalness, we control the prosody of the synthesized speech by controlling the energy and pitch patterns. The synthesis system is implemented on a PC/486 and uses a 70 Hz-4.5 kHz band-pass filter for speech input/output, an amplifier, and a TMS320C30 DSP board.

  • PDF
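
The residual-excitation idea mentioned in the abstract (driving the LPC synthesis filter with the analysis residual rather than a synthetic source for unvoiced sounds) can be sketched as follows, assuming librosa and SciPy and an arbitrary LPC order.

```python
import numpy as np
import librosa
from scipy.signal import lfilter

def residual_resynthesis(frame, order=12):
    """Compute the LPC residual of a frame and resynthesize the frame by
    exciting the LPC synthesis filter with that residual."""
    frame = np.asarray(frame, dtype=float)
    a = librosa.lpc(frame, order=order)      # [1, a1, ..., ap]
    residual = lfilter(a, [1.0], frame)      # inverse (analysis) filter A(z)
    resynth = lfilter([1.0], a, residual)    # synthesis filter 1/A(z) driven by the residual
    return residual, resynth
```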

Detection and Synthesis of Transition Parts of The Speech Signal

  • Kim, Moo-Young
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.33 no.3C
    • /
    • pp.234-239
    • /
    • 2008
  • For efficient coding and transmission, the speech signal can be classified into three distinct classes: voiced, unvoiced, and transition. At low bit rates below 4 kbit/s, conventional sinusoidal transform coders synthesize high-quality speech for the purely voiced and unvoiced classes, but not for the transition class. The transition class, which includes plosive sounds and abrupt voiced onsets, lacks periodicity and is therefore often classified and synthesized as unvoiced. In this paper, an efficient algorithm for transition-class detection is proposed, which demonstrates superior detection performance not only for clean speech but also for noisy speech. For detected transition frames, phase information is transmitted instead of magnitude information for speech synthesis. Listening tests show that the proposed algorithm produces better speech quality than the conventional one. (A toy three-way frame classifier is sketched below.)
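
As a rough illustration of the voiced/transition/unvoiced split described above (not the paper's detector), the sketch below labels a frame from a normalized autocorrelation peak and an energy jump relative to the previous frame; the thresholds and minimum lag are illustrative.

```python
import numpy as np

def classify_frame(frame, prev_energy, min_lag=20, periodic_thresh=0.4, onset_ratio=4.0):
    """Label a frame voiced / transition / unvoiced with simple heuristics.
    Assumes the frame is longer than min_lag samples."""
    frame = np.asarray(frame, dtype=float)
    energy = np.sum(frame ** 2)
    ac = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
    periodicity = np.max(ac[min_lag:]) / (ac[0] + 1e-12)   # normalized autocorrelation peak
    if periodicity > periodic_thresh:
        label = 'voiced'
    elif prev_energy > 0 and energy / prev_energy > onset_ratio:
        label = 'transition'          # abrupt onset without periodicity (e.g. a plosive)
    else:
        label = 'unvoiced'
    return label, energy
```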

A Study Of The Meaningful Speech Sound Block Classification Based On The Discrete Wavelet Transform (Discrete Wavelet Transform을 이용한 음성 추출에 관한 연구)

  • Baek, Han-Wook;Chung, Chin-Hyun
    • Proceedings of the KIEE Conference
    • /
    • 1999.07g
    • /
    • pp.2905-2907
    • /
    • 1999
  • Classification of meaningful speech sound blocks provides very important information for speech recognition. The proposed classification technique is based on the DWT (discrete wavelet transform), which provides a fast algorithm and a useful, compact solution for the pre-processing stage of speech recognition. The algorithm is applied to unvoiced/voiced classification and to denoising. (A wavelet-denoising sketch follows this entry.)

  • PDF
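
For the denoising part of the algorithm, a standard wavelet soft-thresholding sketch with PyWavelets is shown below; the wavelet, decomposition level, and universal threshold are common defaults and not necessarily what the paper used.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def dwt_denoise(signal, wavelet='db4', level=4):
    """Soft-threshold wavelet denoising with the universal threshold."""
    signal = np.asarray(signal, dtype=float)
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # noise estimate from finest details
    thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))       # universal threshold
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode='soft') for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)
```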

A Study on PCFBD-MPC in 8kbps (8kbps에 있어서 PCFBD-MPC에 관한 연구)

  • Lee, See-woo
    • Journal of Internet Computing and Services
    • /
    • v.18 no.5
    • /
    • pp.17-22
    • /
    • 2017
  • In MPC coding that uses voiced and unvoiced excitation sources, distortion of the speech waveform can occur. This is caused by the normalization of the synthesized voiced waveform in the process of restoring the multi-pulses of the representative section. This paper presents PCFBD-MPC (Position Compensation Frequency Band Division - Multi Pulse Coding), which uses V/UV/S (Voiced/Unvoiced/Silence) switching, position compensation of the multi-pulses in each pitch interval, and approximate synthesis of unvoiced sound using specific frequencies in order to reduce distortion of the synthesized waveform. The PCFBD-MPC system was implemented and its segmental SNR (SNRseg) was evaluated under 8 kbps coding conditions (a sketch of the SNRseg measure is given below). As a result, the SNRseg of PCFBD-MPC was 13.4 dB for a female voice and 13.8 dB for a male voice. Future work will evaluate the sound quality of an 8 kbps speech coding method that simultaneously compensates the amplitude and position of the multi-pulse source. These methods are expected to be applicable to low-bit-rate speech coding, for example in cellular phones or smartphones.
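
The SNRseg values reported above refer to segmental SNR, i.e., the per-frame SNR averaged over the utterance. A minimal sketch follows, assuming time-aligned NumPy arrays and a 20 ms frame (160 samples at 8 kHz).

```python
import numpy as np

def snr_seg_db(original, synthesized, frame_len=160):
    """Segmental SNR: mean of per-frame SNRs in dB (160 samples = 20 ms at 8 kHz)."""
    original = np.asarray(original, dtype=float)
    synthesized = np.asarray(synthesized, dtype=float)
    snrs = []
    for i in range(len(original) // frame_len):
        o = original[i * frame_len:(i + 1) * frame_len]
        e = o - synthesized[i * frame_len:(i + 1) * frame_len]
        if np.sum(o ** 2) > 0 and np.sum(e ** 2) > 0:
            snrs.append(10.0 * np.log10(np.sum(o ** 2) / np.sum(e ** 2)))
    return float(np.mean(snrs))
```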