• Title/Summary/Keyword: Unvoiced

Plosive consonants recognition using acoustic properties with the frames representing each phoneme (조음 특성과 음소 대표 구간을 이용한 우리말 파열음의 인식)

  • 박찬응;이쾌희
    • Journal of the Korean Institute of Telematics and Electronics S
    • /
    • v.34S no.4
    • /
    • pp.33-41
    • /
    • 1997
  • Korean unvoiced phonemes consist of nonstationary parts, whereas vowels and nasal consonants consist of quasi-stationary parts. In addition, phonemes that share the same place of articulation but differ in manner of articulation have similar characteristics, which makes them hard to distinguish from one another. A new method that uses the changes and characteristics of the acoustic properties of these phonemes is proposed to improve the recognition rate. Because these changes and characteristics occur consistently in continuous speech, except when some unvoiced consonants are realized as voiced phonemes in medial position between voiced phonemes, the method can be applied easily. Features of the frames extracted to represent each phoneme are used as inputs to a hierarchical neural network, and the final phoneme decision is made through post-processing to which the new method is applied. In recognition experiments on nine unvoiced consonants belonging to the bilabial, alveolar, and velar series, a recognition rate of 89.4% was obtained for distinguishing phonemes within the same series, and 85.6% when distinguishing between series was also included. (An illustrative two-stage classification sketch follows this entry.)

  • PDF
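
The hierarchical classification described in the abstract above can be pictured with the minimal two-stage sketch below. This is only an illustration under assumptions: `classify_hierarchical`, `series_model`, and `phoneme_models` are hypothetical placeholders standing in for the paper's trained networks, and the random linear models and 12-dimensional feature vector exist purely to make the example runnable.

```python
import numpy as np

def classify_hierarchical(features, series_model, phoneme_models):
    """Two-stage decision: pick the phoneme series first (bilabial,
    alveolar, or velar), then pick the phoneme within that series."""
    series = int(np.argmax(series_model(features)))             # stage 1: series
    phoneme = int(np.argmax(phoneme_models[series](features)))  # stage 2: phoneme
    return series, phoneme

# Toy usage with random linear "models" (3 series, 3 phonemes per series).
rng = np.random.default_rng(0)
dim = 12                                   # assumed frame-feature dimension
W_series = rng.standard_normal((3, dim))
series_model = lambda f: W_series @ f
phoneme_models = [(lambda f, W=rng.standard_normal((3, dim)): W @ f)
                  for _ in range(3)]
print(classify_hierarchical(rng.standard_normal(dim), series_model, phoneme_models))
```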

A Study on a Searching, Extraction and Approximation-Synthesis of Transition Segment in Continuous Speech (연속음성에서 천이구간의 탐색, 추출, 근사합성에 관한 연구)

  • Lee, Si-U
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.4
    • /
    • pp.1299-1304
    • /
    • 2000
  • In a speech coding system that uses separate voiced and unvoiced excitation sources, speech quality is degraded when voiced and unvoiced consonants coexist within a single frame. A method is therefore proposed for searching, extracting, and approximation-synthesizing the TSIUVC (Transition Segment Including UnVoiced Consonant) so that voiced and unvoiced consonants do not coexist in a frame. The method is based on the zero-crossing rate and a pitch detector using an FIR-STREAK digital filter. The TSIUVC extraction rates are 84.8% (plosive), 94.9% (fricative), and 92.3% (affricate) for female voices, and 88% (plosive), 94.9% (fricative), and 92.3% (affricate) for male voices. High-quality approximation-synthesis waveforms within the TSIUVC are also obtained using frequency information below 0.547 kHz and above 2.813 kHz. The method can be applied to low-bit-rate speech coding, speech analysis, and speech synthesis. (A minimal frame-wise zero-crossing-rate sketch follows this entry.)

  • PDF
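
As a rough illustration of the zero-crossing-rate measure the TSIUVC search relies on (the FIR-STREAK pitch detector itself is not reproduced here), a minimal frame-wise ZCR computation might look like the following; the frame length, sampling rate, and test signals are arbitrary assumptions.

```python
import numpy as np

def frame_zcr(x, frame_len=160):
    """Zero-crossing rate per non-overlapping frame (crossings per sample)."""
    n_frames = len(x) // frame_len
    zcr = np.empty(n_frames)
    for i in range(n_frames):
        frame = x[i * frame_len:(i + 1) * frame_len]
        signs = np.sign(frame)
        signs[signs == 0] = 1                      # treat exact zeros as positive
        zcr[i] = np.mean(np.abs(np.diff(signs)) > 0)
    return zcr

# Unvoiced consonants typically show a much higher ZCR than voiced speech,
# which is what makes the rate useful for locating transition segments.
fs = 8000                                           # assumed sampling rate
t = np.arange(fs) / fs
voiced_like = np.sin(2 * np.pi * 120 * t)           # low-frequency, voiced-like
noise_like = np.random.default_rng(1).standard_normal(fs)  # fricative-like
print(frame_zcr(voiced_like).mean(), frame_zcr(noise_like).mean())
```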

Real-time implementation and performance evaluation of speech classifiers in speech analysis-synthesis

  • Kumar, Sandeep
    • ETRI Journal
    • /
    • v.43 no.1
    • /
    • pp.82-94
    • /
    • 2021
  • In this work, six voiced/unvoiced speech classifiers based on the autocorrelation function (ACF), average magnitude difference function (AMDF), cepstrum, weighted ACF (WACF), zero crossing rate and energy of the signal (ZCR-E), and neural networks (NNs) have been simulated and implemented in real time using the TMS320C6713 DSP starter kit. These speech classifiers have been integrated into a linear-predictive-coding-based speech analysis-synthesis system, and their performance has been compared in terms of voiced/unvoiced classification accuracy, speech quality, and computation time. The classification accuracy and speech quality results show that the NN-based speech classifier performs better than the ACF-, AMDF-, cepstrum-, WACF-, and ZCR-E-based speech classifiers in both clean and noisy environments. The computation time results show that the AMDF-based speech classifier is computationally simple and therefore the fastest, while the NN-based speech classifier requires the most computation time.
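
As a sketch of one of the simpler classifiers compared above, an AMDF-based voiced/unvoiced decision could be written as follows. The lag range, frame length, and valley-to-mean threshold are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

def amdf(frame, lag):
    """Average magnitude difference function of a frame at a given lag."""
    n = len(frame) - lag
    return np.mean(np.abs(frame[:n] - frame[lag:lag + n]))

def is_voiced_amdf(frame, fs=8000, fmin=60, fmax=400, ratio_thresh=0.35):
    """Voiced if the AMDF has a deep valley somewhere in the pitch-lag range.

    The frame must be longer than fs/fmin samples; the 0.35 valley-to-mean
    threshold is an assumed value."""
    lags = np.arange(int(fs / fmax), int(fs / fmin))
    values = np.array([amdf(frame, k) for k in lags])
    return values.min() < ratio_thresh * values.mean()

# Toy check: a 200 Hz sine frame looks voiced, white noise looks unvoiced.
fs = 8000
t = np.arange(240) / fs                        # 30 ms frame at 8 kHz
print(is_voiced_amdf(np.sin(2 * np.pi * 200 * t), fs=fs))                    # True
print(is_voiced_amdf(np.random.default_rng(3).standard_normal(240), fs=fs))  # False
```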

An ACLMS-MPC Coding Method Integrated with ACFBD-MPC and LMS-MPC at 8kbps bit rate. (8kbps 비트율을 갖는 ACFBD-MPC와 LMS-MPC를 통합한 ACLMS-MPC 부호화 방식)

  • Lee, See-woo
    • Journal of Internet Computing and Services
    • /
    • v.19 no.6
    • /
    • pp.1-7
    • /
    • 2018
  • This paper presents an 8 kbps ACLMS-MPC (Amplitude Compensation and Least Mean Square - Multi Pulse Coding) method that integrates ACFBD-MPC (Amplitude Compensation Frequency Band Division - Multi Pulse Coding) and LMS-MPC (Least Mean Square - Multi Pulse Coding). It uses V/UV/S (Voiced/Unvoiced/Silence) switching, amplitude compensation of the multi-pulses in each pitch interval, and unvoiced approximate-synthesis using specific frequency bands in order to reduce distortion of the synthesized waveform. When integrating these methods, it is important to keep the bit rate of the voiced and unvoiced sound sources at 8 kbps while reducing distortion of the speech waveform; the waveform can then be synthesized efficiently by restoring the individual pitch intervals with the multi-pulses of the representative interval. The ACLMS-MPC method was implemented and its SNR was evaluated under 8 kbps coding conditions. The SNR of ACLMS-MPC was 15.0 dB for a female voice and 14.3 dB for a male voice, an improvement of 0.3 dB to 1.8 dB for the male voice and 0.3 dB to 1.6 dB for the female voice over the existing MPC, ACFBD-MPC, and LMS-MPC. These methods are expected to be applicable to low-bit-rate speech coding such as cellular or Internet telephony. Future work will evaluate the sound quality of a 6.9 kbps coding method that simultaneously compensates the amplitude and position of the multi-pulse source.
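
The SNR figures quoted above can be read against the standard waveform SNR definition below; this is a generic sketch of the metric, not the paper's evaluation code.

```python
import numpy as np

def snr_db(original, synthesized):
    """Signal-to-noise ratio in dB between an original waveform and its
    synthesized version of equal length:
    10 * log10( sum(x^2) / sum((x - x_hat)^2) )."""
    x = np.asarray(original, dtype=float)
    e = x - np.asarray(synthesized, dtype=float)
    return 10.0 * np.log10(np.sum(x**2) / np.sum(e**2))
```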

On a Detection of the ZCR-Parameter for Higher Formants of Speech Signals (음성신호의 상위 포만트에 대한 ZCR-파라미터 검출에 관한 연구)

  • 유건수
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1992.06a
    • /
    • pp.49-53
    • /
    • 1992
  • In many applications such as speech analysis, speech coding, and speech recognition, the voiced/unvoiced decision must be made correctly for efficient processing. One of the parameters used for the voiced/unvoiced decision is the zero-crossing rate; however, the information carried by the higher formants is not represented by the conventional zero-crossing rate of the speech signal. (A sketch of a band-limited zero-crossing measure follows this entry.)

  • PDF
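
One way to make a zero-crossing measure reflect the higher formants, in the spirit of the abstract above, is to high-pass filter the signal before counting crossings. The cutoff frequency and the Butterworth filter below are illustrative assumptions; they are not the paper's frequency-to-voltage interval-filter design.

```python
import numpy as np
from scipy.signal import butter, lfilter

def highband_zcr(x, fs, cutoff_hz=2000.0, order=4):
    """Zero-crossing rate (crossings per second) of the signal after a
    high-pass filter, so that it reflects the higher formants rather than
    being dominated by F0/F1 energy."""
    b, a = butter(order, cutoff_hz / (fs / 2), btype="high")
    y = lfilter(b, a, x)
    signs = np.sign(y)
    signs[signs == 0] = 1
    crossings = np.sum(np.abs(np.diff(signs)) > 0)
    return crossings * fs / len(y)
```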

Spectral subtraction based on speech state and masking effect

  • 김우일;강선미;고한석
    • Proceedings of the IEEK Conference
    • /
    • 1998.06a
    • /
    • pp.599-602
    • /
    • 1998
  • In this paper, a speech enhancement method based on phonemic properties and the masking effect is proposed. It is a modified spectral subtraction in which a spectral sharpening process is applied in the unvoiced state, taking the phonemic properties into account, and the masking threshold is used to remove residual noise. The proposed method performs similarly to classical spectral subtraction in terms of SNR, but the unvoiced regions of the enhanced speech exhibit relatively less signal distortion. (A bare-bones spectral subtraction sketch follows this entry.)

  • PDF
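
For context, a bare-bones magnitude spectral subtraction, without the paper's phoneme-dependent sharpening or masking threshold, looks like the sketch below; the over-subtraction factor and spectral floor are generic assumed values.

```python
import numpy as np

def spectral_subtract(noisy_frame, noise_mag, alpha=2.0, beta=0.01):
    """Classical magnitude spectral subtraction for a single windowed frame.

    noise_mag is an estimate of the noise magnitude spectrum (e.g. averaged
    over non-speech frames, same length as the rfft of the frame); alpha is
    the over-subtraction factor and beta the spectral floor, both assumed."""
    spec = np.fft.rfft(noisy_frame)
    mag, phase = np.abs(spec), np.angle(spec)
    clean_mag = np.maximum(mag - alpha * noise_mag, beta * mag)
    return np.fft.irfft(clean_mag * np.exp(1j * phase), n=len(noisy_frame))
```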

Improved Excitation Modeling for Low-Rate CELP Speech Coding

  • Kwon, Chul-Hong
    • The Journal of the Acoustical Society of Korea
    • /
    • v.18 no.2E
    • /
    • pp.24-30
    • /
    • 1999
  • In this paper, we propose a weighting-dependent mixed source model (WD-MSM) coder that is an improved version of a CELP-based mixed source model (C-MSM) coder. The coder classifies speech segments into three types: voiced, unvoiced, and mixed. The excitation for a voiced frame is an adaptive source, the excitation for an unvoiced frame is a stochastic source, and a modified mixed source is used for a mixed frame. Different weighting functions are applied to the three classes. Simulation results show that the proposed coder at 4 kbit/s yields very good performance both subjectively and objectively. (A toy excitation-mixing sketch follows this entry.)

  • PDF
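
The mixed-source idea can be pictured with the toy excitation blend below: a frame-level weight mixes an adaptive (pitch-periodic) component with a stochastic (noise) component. The pulse train, noise frame, and weight value are placeholders, not the coder's actual sources or weighting functions.

```python
import numpy as np

def mixed_excitation(adaptive, stochastic, w):
    """Blend adaptive and stochastic excitation with a weight in [0, 1]:
    w = 1 for a purely voiced frame, w = 0 for a purely unvoiced frame,
    and an intermediate value for a mixed frame."""
    return w * np.asarray(adaptive) + (1.0 - w) * np.asarray(stochastic)

# Toy frame: a pitch-periodic pulse train mixed with Gaussian noise.
frame_len, pitch_period = 160, 40
adaptive = np.zeros(frame_len)
adaptive[::pitch_period] = 1.0                      # crude adaptive source
stochastic = np.random.default_rng(2).standard_normal(frame_len)
mixed_frame = mixed_excitation(adaptive, stochastic, w=0.6)
```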

A Study on Implementation of Real Time Voiced/Unvoiced/Silence Discrimination System (실시간 유성음 무성음 무음 식별장치의 구성에 관한 연구)

  • Bang, Man Won;Choi, Kap Seok
    • Journal of the Korean Institute of Telematics and Electronics
    • /
    • v.23 no.4
    • /
    • pp.565-570
    • /
    • 1986
  • In this paper, the implementation of a voiced/unvoiced/silence discrimination system is presented. The algorithm is based on the zero-crossing rate and the spectral energy distribution of speech. In measuring the zero-crossing rate, a new frequency-to-voltage conversion type interval filter is used. Experimental results show that the proposed algorithm removes the effect of impulse noise in voiced intervals. (A software sketch of the energy/zero-crossing decision follows this entry.)

  • PDF
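
A minimal software analogue of the decision logic, ignoring the hardware frequency-to-voltage interval filter, combines frame energy and zero-crossing rate with two thresholds; the threshold values below are assumptions for illustration.

```python
import numpy as np

def vus_decision(frame, energy_thresh=1e-3, zcr_thresh=0.25):
    """Classify one frame as 'silence', 'unvoiced', or 'voiced' from its
    mean-square energy and zero-crossing rate (thresholds are assumed)."""
    frame = np.asarray(frame, dtype=float)
    energy = np.mean(frame**2)
    signs = np.sign(frame)
    signs[signs == 0] = 1
    zcr = np.mean(np.abs(np.diff(signs)) > 0)
    if energy < energy_thresh:
        return "silence"
    return "unvoiced" if zcr > zcr_thresh else "voiced"
```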

Speech Transition Detection and approximate-synthesis Method for Speech Signal Compression and Recovery (음성신호 압축 및 복원을 위한 음성 천이구간 검출과 근사합성 방식)

  • Lee, Kwang-Seok;Kim, Bong-Gi;Kang, Seong-Soo;Kim, Hyun-Deok
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2008.05a
    • /
    • pp.763-767
    • /
    • 2008
  • In a speech coding system that uses separate voiced and unvoiced excitation sources, speech quality is degraded when voiced and unvoiced consonants coexist within a single frame. We therefore propose a method for searching and extracting the TS (Transition Segment) that includes the unvoiced consonant, so that voiced and unvoiced consonants do not coexist in a frame, together with a new TS approximate-synthesis method based on least mean square estimation and frequency band division. High-quality approximation-synthesis waveforms within the TS are obtained using frequency information below 0.547 kHz and above 2.813 kHz; notably, even where the error signal is largest, the approximation-synthesis waveform within the TS shows low distortion. The method can be applied to a new Voiced/Silence/TS speech coding scheme, speech analysis, and speech synthesis. (A sketch of the frequency band division follows this entry.)

  • PDF
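
The frequency-band selection mentioned above (keeping information below 0.547 kHz and above 2.813 kHz) can be sketched with a simple FFT mask. The sampling rate and the use of an FFT mask, rather than the paper's actual band-division filters, are assumptions made for illustration.

```python
import numpy as np

def band_divide(segment, fs=8000, low_hz=547.0, high_hz=2813.0):
    """Return the component of the segment whose spectrum lies below
    low_hz or above high_hz, discarding the band in between."""
    spec = np.fft.rfft(segment)
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / fs)
    keep = (freqs <= low_hz) | (freqs >= high_hz)
    return np.fft.irfft(spec * keep, n=len(segment))
```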

Speech Signal Compression and Recovery Using Transition Detection and Approximate-Synthesis (천이구간 추출 및 근사합성에 의한 음성신호 압축과 복원)

  • Lee, Kwang-Seok;Lee, Byeong-Ro
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.13 no.2
    • /
    • pp.413-418
    • /
    • 2009
  • In a speech coding system that uses separate voiced and unvoiced excitation sources, speech quality is degraded when voiced and unvoiced consonants coexist within a single frame. We therefore propose a method for searching and extracting the TS (Transition Segment) that includes the unvoiced consonant, so that voiced and unvoiced consonants do not coexist in a frame, together with a new TS approximate-synthesis method based on least mean square estimation and frequency band division. High-quality approximation-synthesis waveforms within the TS are obtained using frequency information below 0.547 kHz and above 2.813 kHz; notably, even where the error signal is largest, the approximation-synthesis waveform within the TS shows low distortion. The method can be applied to a new Voiced/Silence/TS speech coding scheme, speech analysis, and speech synthesis.
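
The least-mean-square part of the approximate-synthesis can be pictured with the textbook LMS adaptive filter below, which fits the transition segment as a filtered version of a reference signal. The filter order, step size, and the choice of a plain LMS FIR adaptation are assumed for illustration; this is a generic sketch, not the authors' implementation.

```python
import numpy as np

def lms_approximate(target, reference, order=8, mu=0.01):
    """Approximate `target` as an adaptive FIR filtering of `reference`
    using the standard LMS coefficient update; returns the synthesized
    waveform (the first `order` samples are left at zero)."""
    target = np.asarray(target, dtype=float)
    reference = np.asarray(reference, dtype=float)
    w = np.zeros(order)
    out = np.zeros(len(target))
    for n in range(order, len(target)):
        x = reference[n - order:n][::-1]        # most recent samples first
        out[n] = np.dot(w, x)
        e = target[n] - out[n]                  # instantaneous error
        w += 2.0 * mu * e * x                   # LMS update
    return out
```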