• Title/Summary/Keyword: speech quality

Search Results: 807

Minima Controlled Speech Presence Uncertainty Tracking Method for Speech Enhancement (음성 향상을 위한 최소값 제어 음성 존재 부정확성의 추적기법)

  • Lee, Woo-Jung;Chang, Joon-Hyuk
    • The Journal of the Acoustical Society of Korea
    • /
    • v.28 no.7
    • /
    • pp.668-673
    • /
    • 2009
  • In this paper, we propose a minima-controlled speech presence uncertainty tracking method to improve speech enhancement. The conventional speech presence uncertainty tracking approach estimates distinct values of the a priori speech absence probability for different frames and channels; this estimation is inherently based on the a posteriori SNR and is used to estimate the speech absence probability (SAP). Here we propose a novel estimation of distinct a priori speech absence probability values for different frames and channels, based on minima-controlled speech presence uncertainty tracking. The estimate is then applied to the calculation of the speech absence probability for speech enhancement. The performance of the proposed algorithm is evaluated with the ITU-T P.862 perceptual evaluation of speech quality (PESQ) under various noise environments. We show that the proposed algorithm yields better results than conventional speech presence uncertainty tracking.
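The chain the abstract describes, in which the a posteriori SNR drives a per-frame, per-channel a priori speech absence probability that then enters the SAP computation, can be sketched as follows. This is a minimal illustration under the usual Gaussian statistical model; the mapping in `a_priori_sap` and its threshold values are illustrative assumptions, not the paper's estimator.

```python
import math

def speech_absence_prob(gamma, xi, q):
    """Speech absence probability for one frequency bin under a
    Gaussian statistical model.

    gamma : a posteriori SNR
    xi    : a priori SNR estimate
    q     : a priori speech absence probability for this frame/bin
    """
    v = gamma * xi / (1.0 + xi)
    # generalized likelihood ratio of speech presence vs. absence
    lam = ((1.0 - q) / q) * math.exp(v) / (1.0 + xi)
    return 1.0 / (1.0 + lam)

def a_priori_sap(gamma, threshold=0.8, q_max=0.95, q_min=0.05):
    """Map the a posteriori SNR to a per-bin a priori absence probability:
    low SNR -> absence likely (q near q_max); high SNR -> q shrinks."""
    if gamma <= threshold:
        return q_max
    return max(q_min, q_max / gamma)
```

A higher a posteriori SNR pushes the likelihood ratio up and the resulting speech absence probability down, which is the qualitative behaviour the tracking method relies on.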

Analyzing the element of emotion recognition from speech (음성으로부터 감성인식 요소분석)

  • 심귀보;박창현
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.11 no.6
    • /
    • pp.510-515
    • /
    • 2001
  • Generally, the elements for emotion recognition from a speech signal include (1) the words of the conversation, (2) tone, (3) pitch, (4) formant frequencies, and (5) speech rate. For human beings, tone, voice quality, speed, and words are easier cues than frequency for perceiving another person's feelings, so the former are important elements for classifying emotions, and previous methods have mainly used them; formants, however, are well suited to machine implementation. Thus, the final goal of this research is to implement an emotion recognition system based on pitch, formants, speech rate, etc., extracted from the speech signal. In this paper, as a first stage, we identified specific features of anger in a speaker's words when he became angry.
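Two of the elements listed above, pitch and frame energy (a building block of speech-rate measures), can be estimated with very simple signal-processing primitives. The sketch below uses textbook autocorrelation pitch picking; it is an illustrative assumption of how such features might be extracted, not the paper's specific method.

```python
import math

def autocorr_pitch(frame, sr, fmin=60.0, fmax=400.0):
    """Crude pitch estimate (Hz): pick the autocorrelation peak within
    the lag range corresponding to plausible pitch frequencies."""
    lo, hi = int(sr / fmax), int(sr / fmin)
    best_lag, best_val = lo, float("-inf")
    for lag in range(lo, min(hi, len(frame) - 1)):
        val = sum(frame[i] * frame[i + lag] for i in range(len(frame) - lag))
        if val > best_val:
            best_val, best_lag = val, lag
    return sr / best_lag

def frame_energy(frame):
    """Mean squared amplitude; thresholding this over time gives a
    simple speech/silence segmentation for rate features."""
    return sum(x * x for x in frame) / len(frame)
```

For a pure 100 Hz tone sampled at 8 kHz, the autocorrelation peak lands at a lag of 80 samples, recovering the 100 Hz pitch.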


VQ Codebook Index Interpolation Method for Frame Erasure Recovery of CELP Coders in VoIP

  • Lim Jeongseok;Yang Hae Yong;Lee Kyung Hoon;Park Sang Kyu
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.9C
    • /
    • pp.877-886
    • /
    • 2005
  • Various frame recovery algorithms have been suggested to overcome the communication quality degradation caused by Internet-typical impairments in Voice over IP (VoIP) communications. In this paper, we propose a new receiver-based recovery method that enhances recovered speech quality at almost no computational cost and without additional delay or bandwidth consumption. Most conventional recovery algorithms try to recover lost or erroneous speech frames by reconstructing missing coefficients or the speech signal during the decoding process, so they eventually require modifying the decoder software. The proposed algorithm instead reconstructs the missing frame itself and does not carry the burden of a modified decoder. In the proposed scheme, the Vector Quantization (VQ) codebook indices of the erased frame are directly estimated by referring to pre-computed VQ Codebook Index Interpolation Tables (VCIIT), using the VQ indices of the adjacent (previous and next) frames. We applied the proposed scheme to the ITU-T G.723.1 speech coder and found that it improved reconstructed speech quality and outperformed the conventional G.723.1 loss recovery algorithm. Moreover, the scheme is easily applicable to practical VoIP systems because it requires only a very small amount of additional computation and memory.
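The table-lookup idea can be illustrated with a toy vector quantizer. In this sketch the codebook, the table layout, and the nearest-to-midpoint rule are illustrative assumptions (the paper's actual VCIIT construction for G.723.1 parameters is more involved); it shows how an erased frame's index could be estimated purely from its neighbours' indices.

```python
def build_vciit(codebook):
    """Pre-compute an interpolation table: for each (prev, next) index
    pair, store the index of the codeword nearest the midpoint of the
    two neighbouring codewords."""
    n = len(codebook)
    table = [[0] * n for _ in range(n)]
    for p in range(n):
        for q in range(n):
            mid = [(a + b) / 2.0 for a, b in zip(codebook[p], codebook[q])]
            table[p][q] = min(
                range(n),
                key=lambda k: sum((m - c) ** 2 for m, c in zip(mid, codebook[k])),
            )
    return table

def recover_index(table, prev_idx, next_idx):
    """Estimate the erased frame's VQ index from its neighbours:
    a single table lookup, no decoding-time arithmetic."""
    return table[prev_idx][next_idx]
```

Because the table is built offline, the decoder-side cost of recovery is one array access per erased parameter, matching the "almost free" claim in spirit.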

Automatic severity classification of dysarthria using voice quality, prosody, and pronunciation features (음질, 운율, 발음 특징을 이용한 마비말장애 중증도 자동 분류)

  • Yeo, Eun Jung;Kim, Sunhee;Chung, Minhwa
    • Phonetics and Speech Sciences
    • /
    • v.13 no.2
    • /
    • pp.57-66
    • /
    • 2021
  • This study focuses on automatic severity classification of dysarthric speakers based on speech intelligibility. Speech intelligibility is a complex measure affected by features of multiple speech dimensions, yet most previous studies are restricted to features from a single dimension. To effectively capture the characteristics of the speech disorder, we extracted features from multiple speech dimensions: voice quality, prosody, and pronunciation. Voice quality consists of jitter, shimmer, Harmonic-to-Noise Ratio (HNR), number of voice breaks, and degree of voice breaks. Prosody includes speech rate (total duration, speech duration, speaking rate, articulation rate), pitch (F0 mean/std/min/max/median/25th quartile/75th quartile), and rhythm (%V, deltas, Varcos, rPVIs, nPVIs). Pronunciation contains the percentage of correct phonemes (percentage of correct consonants/vowels/total phonemes) and the degree of vowel distortion (Vowel Space Area, Formant Centralized Ratio, Vowel Articulatory Index, F2-Ratio). Experiments were conducted using various feature combinations. The results indicate that using features from all three speech dimensions gives the best result, with an F1-score of 80.15, compared to using features from just one or two dimensions. This implies that voice quality, prosody, and pronunciation features should all be considered in automatic severity classification of dysarthria.
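Two of the voice-quality features named above have simple textbook definitions. The sketch below implements the common "local" variants of jitter and shimmer; the formulas are the standard ones, though the study's exact extraction settings are not specified here and these defaults are illustrative.

```python
def jitter_local(periods):
    """Local jitter (%): mean absolute difference between consecutive
    pitch periods, relative to the mean period."""
    diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
    return 100.0 * (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

def shimmer_local(amplitudes):
    """Local shimmer (%): the same ratio computed on consecutive
    cycle peak amplitudes instead of periods."""
    diffs = [abs(a - b) for a, b in zip(amplitudes, amplitudes[1:])]
    return 100.0 * (sum(diffs) / len(diffs)) / (sum(amplitudes) / len(amplitudes))
```

A perfectly periodic voice gives zero jitter; cycle-to-cycle irregularity, common in dysarthric speech, raises both measures.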

A Study on Multi-Pulse Speech Coding Method by Using V/S/TSIUVC (V/S/TSIUVC를 이용한 멀티펄스 음성부호화 방식에 관한 연구)

  • Lee See-Woo
    • Journal of Korea Multimedia Society
    • /
    • v.7 no.9
    • /
    • pp.1233-1239
    • /
    • 2004
  • In a speech coding system using voiced and unvoiced excitation sources, speech quality is distorted when voiced and unvoiced consonants coexist in a frame. This paper presents a new multi-pulse coding method that limits this distortion by using V/S/TSIUVC switching, individual pitch pulses, and a TSIUVC approximation-synthesis method. The TSIUVC is extracted using the zero-crossing rate and individual pitch pulses; the TSIUVC extraction rate was 91% for female voices and 96.2% for male voices. Importantly, the frequency information below 0.347 kHz and above 2.813 kHz can be reproduced as a high-quality synthesis waveform within the TSIUVC. We evaluated the MPC using V/UV switching against the FBD-MPC using V/S/TSIUVC, and found that the synthesized speech of the FBD-MPC was better in quality than that of the MPC.
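The zero-crossing rate used in the TSIUVC extraction has a standard definition, sketched below together with a crude voiced/unvoiced decision. The 0.3 threshold and the two-way labels are illustrative assumptions, not the paper's switching logic.

```python
def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ."""
    crossings = sum(
        1 for a, b in zip(frame, frame[1:]) if (a >= 0) != (b >= 0)
    )
    return crossings / (len(frame) - 1)

def classify_frame(frame, zcr_threshold=0.3):
    """Crude V/UV decision: unvoiced (noise-like) frames tend to have
    a much higher zero-crossing rate than voiced frames."""
    return "unvoiced" if zero_crossing_rate(frame) > zcr_threshold else "voiced"
```

In a full V/S/TSIUVC scheme this kind of measure, combined with pitch-pulse analysis, selects which excitation model a frame (or sub-frame) receives.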


Real-time Implementation of Variable Transmission Bit Rate Vocoder Improved Speech Quality in SOLA-B Algorithm & G.729A Vocoder Using on the TMS320C5416 (TMS320C5416을 이용한 SOLA-B 알고리즘과 G.729A 보코더의 음질 향상된 가변 전송률 보코더의 실시간 구현)

  • Ham, Myung-Kyu;Bae, Myung-Jin
    • Speech Sciences
    • /
    • v.10 no.3
    • /
    • pp.241-250
    • /
    • 2003
  • In this paper, we implemented a variable-rate vocoder in real time on the TMS320C5416 by applying the SOLA-B algorithm to G.729A. In this method, the SOLA-B algorithm shortens the duration of the speech before encoding, and playback at normal speed is restored by extending the duration in decoding. However, simply combining the existing G.729A with the SOLA-B algorithm degrades speech quality, because G.729A does not account for the change in speech duration. The proposed method therefore modifies the structure of the LSP quantization table to reflect the speech shortened by the SOLA-B algorithm. The resulting variable-rate vocoder shows a maximum complexity of 10.2 MIPS for the encoder and 2.8 MIPS for the decoder at an 8 kbps transmission rate; 17.3 MIPS (encoder) and 9.9 MIPS (decoder) at 6 kbps; and 18.5 MIPS (encoder) and 11.1 MIPS (decoder) at 4 kbps. The memory used is about 9.7 kwords of program ROM, 4.69 kwords of table ROM, and 5.2 kwords of RAM. The output waveform was confirmed against the C simulator with a bit-exactness check. In a MOS test evaluating the speech quality of the real-time variable-rate vocoder, the score at 4 kbps was about 3.68.
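The duration change that SOLA-B performs can be illustrated with plain overlap-add time-scale modification. The sketch below omits SOLA's cross-correlation alignment search, so it is a simplification with illustrative window and hop choices, not the SOLA-B algorithm itself; using a synthesis hop smaller than the analysis hop shortens the signal, as the encoder side of the vocoder requires.

```python
def time_compress(signal, frame_len, analysis_hop, synthesis_hop):
    """Overlap-add time-scale compression: frames taken every
    `analysis_hop` samples are overlap-added every `synthesis_hop`
    samples (synthesis_hop < analysis_hop shortens the audio)."""
    out, weight = [], []
    pos = 0
    for start in range(0, len(signal) - frame_len + 1, analysis_hop):
        frame = signal[start:start + frame_len]
        # grow the output and normalization buffers as needed
        need = pos + frame_len - len(out)
        if need > 0:
            out.extend([0.0] * need)
            weight.extend([0.0] * need)
        for i, x in enumerate(frame):
            # triangular window for smooth cross-fades in overlaps
            w = 1.0 - abs(2.0 * i / (frame_len - 1) - 1.0)
            out[pos + i] += w * x
            weight[pos + i] += w
        pos += synthesis_hop
    return [o / w if w > 0 else 0.0 for o, w in zip(out, weight)]
```

With an analysis hop of 10 and a synthesis hop of 5, a 100-sample signal compresses to 60 samples; the decoder side would run the same procedure with the hops swapped to restore the original duration.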


Implementation of G.726 ADPCM Dual Rate Speech Codec of 16Kbps and 40Kbps (16Kbps와 40Kbps의 Dual Rate G.726 ADPCM 음성 codec구현)

  • Kim Jae-Oh;Han Kyong-Ho
    • Journal of IKEEE
    • /
    • v.2 no.2 s.3
    • /
    • pp.233-238
    • /
    • 1998
  • In this paper, the implementation of a dual-rate ADPCM speech codec using the G.726 16 kbps and 40 kbps algorithms is handled. For small signals, the low-rate 16 kbps coding algorithm shows almost the same SNR as the high-rate 40 kbps algorithm, while for large signals the high-rate 40 kbps algorithm shows a higher SNR than the low-rate 16 kbps algorithm. To obtain a good trade-off between data rate and synthesized speech quality, we applied the low 16 kbps rate to small signals and the high 40 kbps rate to large signals. Various threshold values determining the rate were tested to balance data rate against speech quality. The simulation results show good speech quality at a low average rate compared with fixed 16 kbps and 40 kbps coding.
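The large/small-signal switch can be sketched as a per-frame decision on signal level. The mean-absolute-amplitude measure and the threshold value below are illustrative assumptions; the paper experiments with various thresholds.

```python
def select_rate(frame, threshold=1000.0):
    """Pick the coding rate (kbps) per frame from the mean absolute
    amplitude: small signals go to 16 kbps, large signals to 40 kbps."""
    level = sum(abs(s) for s in frame) / len(frame)
    return 16 if level < threshold else 40

def average_rate(frames, threshold=1000.0):
    """Average transmitted rate (kbps) over a sequence of frames,
    showing the data-rate side of the trade-off."""
    rates = [select_rate(f, threshold) for f in frames]
    return sum(rates) / len(rates)
```

Raising the threshold pushes more frames to 16 kbps, lowering the average rate at some cost in SNR for the louder frames.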


A Study on the Speech Transmission Index Method for Estimating Articulation of Loudspeaking Telephony (음성전송지수를 이용한 확성전화기의 명료도 평가 방법)

  • Jang, Dae-Young;Kang, Seong-Hoon;Sim, Dong-Yeon;Kim, Chun-Duck
    • The Journal of the Acoustical Society of Korea
    • /
    • v.13 no.5
    • /
    • pp.32-39
    • /
    • 1994
  • The speech transmission quality of a telephone is quantified in terms of loudness rating, but this method has been validated only for handset telephony. The transmission quality of loudspeaking telephony in a room must be evaluated not only with respect to speech transmission but also background noise, echo, and reverberation, since the effect of room acoustics is much stronger for loudspeaking telephony. A better approach is therefore required to specify its quality. Steeneken proposed a physical method for measuring the quality of speech transmission by calculating the speech transmission index (STI). In this paper, the application of the STI method to estimating the articulation of loudspeaking telephony is discussed. An STI measurement system with high-speed calculation was also built and applied in three rooms having different reverberation times. The results show that the STI decreases as the reverberation time of a room increases, suggesting that the speech transmission index method can be useful for evaluating the articulation of loudspeaking telephony, including sound field characteristics.
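In simplified form, the STI computation maps each modulation transfer value to an apparent SNR, clips it, and averages. The sketch below is an unweighted simplification: the full STI uses 14 modulation frequencies in each of 7 octave bands with band weightings, which are omitted here.

```python
import math

def band_ti(m):
    """Transmission index for one modulation transfer value m in (0, 1):
    convert to apparent SNR, clip to +/-15 dB, map linearly onto [0, 1]."""
    snr = 10.0 * math.log10(m / (1.0 - m))
    snr = max(-15.0, min(15.0, snr))
    return (snr + 15.0) / 30.0

def sti(mtf_values):
    """Simplified STI: unweighted mean transmission index over all
    modulation-frequency / octave-band combinations supplied."""
    return sum(band_ti(m) for m in mtf_values) / len(mtf_values)
```

Reverberation smears envelope modulations, lowering the modulation transfer values and hence the STI, which is the trend the measurements in the three rooms exhibit.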


A Study on the Improvements of Security and Quality for Analog Speech Scrambler (아날로그 음성 비화기의 비도 및 음질 향상에 관한 연구)

  • 공병구;조동호
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.30B no.9
    • /
    • pp.27-35
    • /
    • 1993
  • In this paper, a new algorithm for high-level security and speech quality is proposed. The algorithm is based on rearranging the fast Fourier transform (FFT) coefficients, combined with pre- and post-filtering, a Hamming window, and adaptive pseudo-spectrum insertion. The pre- and post-filters whiten the speech spectrum, and the adaptive pseudo spectrum is inserted so that silence and speech cannot be distinguished. The Hamming window technique is applied for robustness to synchronization errors on the telephone line. The simulation results show that the security of the scrambled signal and the quality of the descrambled signal are fairly improved in both subjective and objective performance tests, and that the new FFT scrambler is robust to synchronization errors.
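The coefficient-rearrangement core of such a scrambler can be sketched as a key-seeded permutation. The keyed shuffle here stands in for the paper's rearrangement rule, and the FFT itself, the filters, and the pseudo-spectrum are omitted; this is an illustrative assumption, not the proposed algorithm.

```python
import random

def scramble(coeffs, key):
    """Rearrange (scramble) FFT coefficients with a permutation derived
    from a shared key; the receiver can rebuild the same permutation."""
    rng = random.Random(key)
    perm = list(range(len(coeffs)))
    rng.shuffle(perm)
    return [coeffs[p] for p in perm], perm

def descramble(scrambled, perm):
    """Invert the permutation to recover the original coefficient order."""
    out = [0.0] * len(scrambled)
    for dst, src in enumerate(perm):
        out[src] = scrambled[dst]
    return out
```

Because scrambling only reorders coefficients, the signal's spectral energy is preserved, which is why whitening and pseudo-spectrum insertion are needed on top of the permutation to hide residual structure.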


The Efficacy of the Bel canto Singing Technique as a Method of Improving Voice Quality of Vocal Bowing Sulcus Vocalis

  • Yoo, Jae-Yeon;Seo, Dong-Il
    • Phonetics and Speech Sciences
    • /
    • v.3 no.4
    • /
    • pp.103-108
    • /
    • 2011
  • The purpose of this study was to investigate the effects of the Bel canto singing technique on voice quality in patients with vocal bowing and sulcus vocalis. Five patients with vocal bowing and five patients with sulcus vocalis participated in the study. Each subject was assessed acoustically (jitter, shimmer, NNE) in the first and last sessions, and Dr. Speech (version 4.0, Tiger-DRS) was used to compare acoustic parameters pre- and post-treatment. The Bel canto training consisted of breathing, relaxation, and phonation exercises. The results showed that the Bel canto singing technique tended to be effective in improving voice quality in patients with these organic voice disorders.
