• Title/Summary/Keyword: speech waveform

Search results: 135

On a Pitch Alteration Method Compensated with the Spectrum for High Quality Speech Synthesis (스펙트럼 보상된 고음질 합성용 피치 변경법)

  • 문효정
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1995.06a
    • /
    • pp.123-126
    • /
    • 1995
  • Waveform coding is concerned with simply preserving the wave shape of the speech signal through a redundancy-reduction process. For speech synthesis, high-quality waveform coding is mainly used in synthesis by analysis. However, because its parameters are not separated into excitation and vocal-tract parameters, waveform coding is difficult to apply to synthesis by rule. In this paper, we propose a new pitch alteration method that changes the pitch period in waveform coding by scaling the time axis and compensating the spectrum. This is a time-frequency-domain method that preserves the phase components of the waveform and keeps spectral distortion to 2.5% or less for a 50% pitch change.
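
The abstract gives only the outline of the method, so the following is a minimal sketch of the general idea, assuming per-period processing: one pitch period is scaled on the time axis with scipy resampling, and the original magnitude spectrum is then re-imposed as a crude stand-in for the spectrum-compensation step. The function name, FFT length, and envelope handling are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.signal import resample

def alter_pitch_period(period, ratio):
    """Illustrative sketch: stretch or shrink one pitch period on the
    time axis (which changes the pitch), then re-impose the original
    magnitude spectrum while keeping the new phase, as a crude
    stand-in for the paper's spectrum compensation.
    ratio > 1 lengthens the period (lower pitch); ratio < 1 shortens it."""
    x = np.asarray(period, dtype=float)
    scaled = resample(x, int(round(len(x) * ratio)))      # time-axis scaling
    nfft = int(2 ** np.ceil(np.log2(2 * max(len(x), len(scaled)))))
    orig_mag = np.abs(np.fft.rfft(x, n=nfft))             # reference magnitude
    new_spec = np.fft.rfft(scaled, n=nfft)
    new_mag = np.abs(new_spec) + 1e-12
    # Spectrum compensation: restore the original magnitude, keep the new phase.
    y = np.fft.irfft(new_spec * (orig_mag / new_mag), n=nfft)
    return y[:len(scaled)]
```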

A Study on 8kbps PC-MPC by Using Position Compensation Method of Multi-Pulse (멀티펄스의 위치보정 방법을 이용한 8kbps PC-MPC에 관한 연구)

  • Lee, See-Woo
    • Journal of Digital Convergence
    • /
    • v.11 no.5
    • /
    • pp.285-290
    • /
    • 2013
  • In MPC coding that uses voiced and unvoiced excitation sources, the speech waveform can be distorted. This is caused by normalizing the synthesized voiced waveform when the multi-pulses of the representative section are restored. To address this problem, this paper presents a position compensation method (PC-MPC) that adjusts the multi-pulses in each pitch interval in order to reduce distortion of the speech waveform. It was confirmed that the method can synthesize a waveform close to the original speech. The MPC and the PC-MPC were evaluated, and the $SNR_{seg}$ of PC-MPC improved by 0.4 dB for a female voice and 0.5 dB for a male voice, respectively. Compared with MPC, PC-MPC keeps the distortion of the speech waveform under control, so this method is expected to be applicable to low-bit-rate excitation coding for cellular phones and smartphones.
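
The abstract does not give implementation details, so the snippet below is only a hedged sketch of the position-compensation idea: the multi-pulses of a representative pitch interval are re-anchored to the pitch marks of the other intervals instead of being copied at absolute positions. The function name, the pulse and pitch-mark representations, and the example values are assumptions.

```python
def replicate_with_position_compensation(rep_pulses, rep_mark, target_marks):
    """Hypothetical sketch: pulses (position, amplitude) found in one
    representative pitch interval are reused in the other pitch intervals
    of the frame.  Each pulse keeps its offset from the representative
    pitch mark and is re-anchored to every target pitch mark, which is
    the spirit of the position compensation described in the abstract."""
    out = []
    for mark in target_marks:
        for pos, amp in rep_pulses:
            out.append((mark + (pos - rep_mark), amp))
    return sorted(out)

# Example: two pulses per pitch interval, replicated into three intervals.
pulses = [(3, 0.9), (7, -0.4)]           # offsets measured from pitch mark 0
print(replicate_with_position_compensation(pulses, 0, [0, 80, 162]))
```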

The Pitch Extraction of Voiced Speech by the Comparison Between the Original and the Repeated Segmental Waveform. (원 파형과 임의 반복시킨 파형의 비교에 의한 유성음의 피치검출)

  • Bae, Myung-Jin;Ann, Sou-Guil
    • Proceedings of the KIEE Conference
    • /
    • 1988.07a
    • /
    • pp.39-42
    • /
    • 1988
  • In speech signal processing, it is necessary to estimate the pitch accurately. We propose a new algorithm that uses, as a pitch-extraction parameter, the correlation coefficient between the original waveform and a repeated segmental waveform within the frame. The correlation coefficient in the frame reflects the periodic component and the transient ratio of the waveform.
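
As a hedged sketch of the repeated-segment comparison (the candidate search range, frame handling, and function name below are assumptions, not the authors' settings):

```python
import numpy as np

def pitch_by_repeated_segment(frame, fs, f_lo=60.0, f_hi=400.0):
    """Sketch: for each candidate period T, repeat the first T samples to
    fill the frame and compute the correlation coefficient with the
    original frame; the candidate with the highest correlation is taken
    as the pitch period."""
    frame = np.asarray(frame, dtype=float)
    n = len(frame)
    best_T, best_r = None, -1.0
    for T in range(int(fs / f_hi), int(fs / f_lo) + 1):
        if T >= n:
            break
        repeated = np.tile(frame[:T], n // T + 1)[:n]
        r = np.corrcoef(frame, repeated)[0, 1]
        if r > best_r:
            best_T, best_r = T, r
    return best_T, best_r   # pitch period in samples and its correlation
```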

A Study on TSIUVC Approximate-Synthesis Method using Least Mean Square (최소 자승법을 이용한 TSIUVC 근사합성법에 관한 연구)

  • Lee, See-Woo
    • The KIPS Transactions:PartB
    • /
    • v.9B no.2
    • /
    • pp.223-230
    • /
    • 2002
  • In a speech coding system that uses voiced and unvoiced excitation sources, the speech waveform can be distorted when a voiced sound and an unvoiced consonant coexist in a frame. This paper presents a new method of TSIUVC (Transition Segment Including Unvoiced Consonant) approximate synthesis using the least-mean-square method. The TSIUVC extraction is based on a zero-crossing rate and an IPP (Individual Pitch Pulses) extraction algorithm that uses the residual signal of a FIR-STREAK digital filter. As a result, this method obtains a high-quality approximately synthesized waveform. Notably, the frequency components of the maximum error signal can be used to build the approximately synthesized waveform with low distortion. This method can be applied to a new Voiced/Silence/TSIUVC speech coder as well as to speech analysis and speech synthesis.
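
The paper's frequency selection from the maximum error signal and its FIR-STREAK residual processing are not reproduced here; the following is only a hedged sketch of approximating a segment over a chosen set of sinusoidal components, with batch least squares standing in for the paper's least-mean-square formulation. The function name and the way `freqs` is chosen are assumptions.

```python
import numpy as np

def lms_approximate(segment, freqs, fs):
    """Hedged sketch: model the segment as a sum of sines and cosines at
    the given frequencies and solve for the coefficients by least squares
    (np.linalg.lstsq).  Returns the approximation and the error signal."""
    segment = np.asarray(segment, dtype=float)
    t = np.arange(len(segment)) / fs
    basis = np.column_stack(
        [np.cos(2 * np.pi * f * t) for f in freqs] +
        [np.sin(2 * np.pi * f * t) for f in freqs]
    )
    coef, *_ = np.linalg.lstsq(basis, segment, rcond=None)
    approx = basis @ coef
    return approx, segment - approx
```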

A Study on Speech Signal Processing of TSIUVC using Least Mean Square (LMS를 이용한 TSIUVC의 음성신호처리에 관한 연구)

  • Lee, See-Woo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.7 no.6
    • /
    • pp.1175-1179
    • /
    • 2006
  • In a speech coding system that uses voiced and unvoiced excitation sources, the speech waveform can be distorted when a voiced sound and an unvoiced consonant exist in the same frame. In this paper, I propose a new method of TSIUVC (Transition Segment Including Unvoiced Consonant) approximate synthesis using the least-mean-square method. As a result, the least-mean-square method yields a high-quality approximately synthesized waveform. Notably, the frequency components of the maximum error signal can be used to produce an approximately synthesized waveform with low distortion. This method can be applied to a new Voiced/Silence/TSIUVC speech coder as well as to speech analysis and synthesis.

A Study on Compensation of Amplitude in Multi Pulse (멀티펄스의 진폭보정에 관한 연구)

  • Lee, See-Woo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.12 no.9
    • /
    • pp.4119-4124
    • /
    • 2011
  • In MPC coding that uses voiced and unvoiced excitation sources, the speech waveform can be distorted when the amplitude of the speech signal increases or decreases within a frame. This is caused by normalizing the synthesized speech signal when the multi-pulses of the representative section are restored. To address this problem, this paper presents an amplitude compensation method (AC-MPC) that adjusts the multi-pulses in each pitch interval in order to reduce distortion of the speech waveform. It was confirmed that the method can synthesize a waveform close to the original speech. The MPC and the AC-MPC were evaluated, and the SNRseg of AC-MPC improved by 0.7 dB for a female voice and 0.7 dB for a male voice, respectively. Compared with MPC, AC-MPC keeps the distortion of the speech waveform under control, so this method is expected to be applicable to low-bit-rate excitation coding for cellular phones and smartphones.
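
Again the abstract omits implementation details; purely as a hedged sketch of per-pitch-interval amplitude compensation, the snippet below rescales the reused pulse amplitudes by the energy ratio between each target interval and the representative interval so that rising or decaying amplitude is tracked. The function name and the energy-based gain rule are assumptions.

```python
import numpy as np

def compensate_pulse_amplitudes(rep_pulses, rep_energy, target_energies):
    """Hypothetical sketch: pulses (position, amplitude) from the
    representative interval are reused in every pitch interval of the
    frame, with amplitudes rescaled by the square root of the energy
    ratio so that amplitude growth or decay within the frame is tracked."""
    out = []
    for energy in target_energies:
        gain = np.sqrt(energy / rep_energy) if rep_energy > 0 else 1.0
        out.append([(pos, amp * gain) for pos, amp in rep_pulses])
    return out   # one compensated pulse set per pitch interval
```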

Enhanced source controlled variable bit-rate scheme in a waveform interpolation coder (Source controlled variable bit-rate scheme을 이용한 파형 보간 부호화기의 음질 개선 기법)

  • Cho, Keun-Seok;Yang, Hee-Sik;Jeong, Sang-Bae;Hahn, Min-Soo
    • Proceedings of the KSPS conference
    • /
    • 2007.05a
    • /
    • pp.315-318
    • /
    • 2007
  • This paper proposes methods to enhance the speech quality of a source-controlled variable bit-rate coder based on waveform interpolation. The methods estimate and regenerate the parameters that are not transmitted from the encoder to the decoder, using repetition and extrapolation schemes. For performance evaluation, PESQ (Perceptual Evaluation of Speech Quality) scores are measured. The experimental results show that the proposed method outperforms the conventional source-controlled variable bit-rate coder; in particular, the extrapolation method performs better than the repetition method.
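
As a minimal sketch of the two schemes compared in the abstract (repetition versus linear extrapolation of untransmitted per-frame parameter vectors); the parameter layout and the example values are assumptions:

```python
import numpy as np

def fill_missing_params(history, missing=1, method="extrapolation"):
    """Sketch: regenerate parameter vectors that were not transmitted,
    either by repeating the last received vector or by linearly
    extrapolating from the last two received vectors."""
    last = np.asarray(history[-1], dtype=float)
    if method == "repetition" or len(history) < 2:
        return [last.copy() for _ in range(missing)]
    prev = np.asarray(history[-2], dtype=float)
    slope = last - prev
    return [last + (i + 1) * slope for i in range(missing)]

# Example: fill two missing frames of a 3-dimensional parameter vector.
history = [[0.2, 1.0, 5.0], [0.3, 0.9, 5.5]]
print(fill_missing_params(history, missing=2, method="extrapolation"))
```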

Voiced/Unvoiced/Silence Classification of Speech Signal Using Wavelet Transform (웨이브렛 변환을 이용한 음성신호의 유성음/무성음/묵음 분류)

  • Son, Young-Ho;Bae, Keun-Sung
    • Speech Sciences
    • /
    • v.4 no.2
    • /
    • pp.41-54
    • /
    • 1998
  • Speech signals are, depending on the characteristics of the waveform, classified as voiced sound, unvoiced sound, or silence. Voiced sound, produced by an air flow generated by the vibration of the vocal cords, is quasi-periodic, while unvoiced sound, produced by a turbulent air flow passing through some constriction in the vocal tract, is noise-like. Silence represents the ambient noise signal during the absence of speech. The need to decide whether a given segment of a speech waveform should be classified as voiced, unvoiced, or silence arises in many speech analysis systems. In this paper, a voiced/unvoiced/silence classification algorithm using spectral change in the wavelet-transformed signal is proposed, and experimental results are presented along with our discussion.
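
The paper's wavelet spectral-change criterion is not reproduced here; the snippet below is only a simplified stand-in that makes the three-way decision from a one-level Haar band split plus an energy threshold. The thresholds and the function name are illustrative assumptions.

```python
import numpy as np

def classify_vus(frame, silence_thresh=1e-4):
    """Simplified stand-in: split the frame into a low band
    (approximation) and a high band (detail) with a one-level Haar
    transform, then threshold the total energy for silence and the
    low/high energy ratio for the voiced/unvoiced decision."""
    x = np.asarray(frame, dtype=float)
    x = x[:len(x) - len(x) % 2]                 # even number of samples
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # low band (Haar)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # high band (Haar)
    if np.mean(x ** 2) < silence_thresh:
        return "silence"
    ratio = np.sum(approx ** 2) / (np.sum(detail ** 2) + 1e-12)
    return "voiced" if ratio > 10.0 else "unvoiced"
```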

A Spectral Smoothing Algorithm for Unit Concatenating Speech Synthesis (코퍼스 기반 음성합성기를 위한 합성단위 경계 스펙트럼 평탄화 알고리즘)

  • Kim Sang-Jin;Jang Kyung Ae;Hahn Minsoo
    • MALSORI
    • /
    • no.56
    • /
    • pp.225-235
    • /
    • 2005
  • Speech unit concatenation with a large database is presently the most popular method for speech synthesis. In this approach, mismatches at the unit boundaries are unavoidable and become one of the causes of quality degradation. This paper proposes an algorithm to reduce undesired discontinuities between subsequent units. Optimal matching points are calculated in two steps: first, the Kullback-Leibler distance measure is utilized for spectral matching, and then unit sliding and overlap windowing are used for waveform matching. The proposed algorithm is implemented for a corpus-based unit-concatenating Korean text-to-speech system that has an automatically labeled database. Experimental results show that our algorithm performs better than raw concatenation or the overlap-smoothing method.
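
As a hedged sketch of the boundary-smoothing idea (cross-correlation over the overlap region stands in for the paper's Kullback-Leibler spectral matching, and a linear crossfade stands in for the overlap windowing; the window shape, search range, and overlap length are assumptions):

```python
import numpy as np

def smooth_join(left, right, overlap=160, max_shift=40):
    """Hedged sketch: slide the second unit within a small range to find
    the best waveform match in the overlap region, then join the two
    units with an overlapping crossfade.  Assumes the second unit is at
    least max_shift + overlap samples long."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    tail = left[-overlap:]
    # 1) Unit sliding: pick the shift with the highest correlation.
    best = max(range(max_shift + 1),
               key=lambda s: np.dot(tail, right[s:s + overlap]))
    shifted = right[best:]
    # 2) Overlap windowing: linear crossfade over the overlap region.
    fade = np.linspace(0.0, 1.0, overlap)
    mixed = tail * (1.0 - fade) + shifted[:overlap] * fade
    return np.concatenate([left[:-overlap], mixed, shifted[overlap:]])
```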

Speech Recognition of the Korean Vowel 'ㅗ' Based on Time Domain Waveform Patterns (시간 영역 파형 패턴에 기반한 한국어 모음 'ㅗ'의 음성 인식)

  • Lee, Jae Won
    • KIISE Transactions on Computing Practices
    • /
    • v.22 no.11
    • /
    • pp.583-590
    • /
    • 2016
  • Recently, rapidly increasing interest in IoT in almost all areas of daily life has led to wide acceptance of speech recognition as a means of HCI. At the same time, the demand for speech recognition systems for mobile environments is increasing rapidly. Server-based speech recognition systems are typically fast and show high recognition rates; however, an internet connection is necessary, and complicated server computation is required, since a voice is recognized in units of words stored in server databases. In this paper, we present a novel method for recognizing the Korean vowel 'ㅗ' as part of a phoneme-based Korean speech recognition system. The proposed method analyzes waveform patterns in the time domain instead of the frequency domain, with a consequent reduction in computational cost. Elementary algorithms for detecting typical waveform patterns of 'ㅗ' are presented and combined to make the final decision. Experimental results show that the proposed method achieves 89.9% recognition accuracy.
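
The paper's specific pattern rules for 'ㅗ' are not reproduced here; the sketch below only illustrates the kind of time-domain feature such a detector might use, counting local maxima within each pitch period. The function name, the framing, and the use of the count as a feature are assumptions.

```python
import numpy as np

def peaks_per_period(frame, period):
    """Illustrative time-domain feature: the number of local maxima in
    each pitch period.  A vowel-specific detector could test whether
    this count and the relative peak amplitudes match the expected
    waveform pattern of the target vowel."""
    x = np.asarray(frame, dtype=float)
    counts = []
    for start in range(0, len(x) - period, period):
        seg = x[start:start + period]
        maxima = np.flatnonzero((seg[1:-1] > seg[:-2]) & (seg[1:-1] > seg[2:]))
        counts.append(len(maxima))
    return counts
```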