• Title/Summary/Keyword: speech waveform

Search Results: 135

Time-Domain Quantization and Interpolation of Pitch Cycle Waveform

  • Kim, Moo-Young
    • The Journal of the Acoustical Society of Korea
    • /
    • v.27 no.1E
    • /
    • pp.11-16
    • /
    • 2008
  • In this paper, a pitch cycle waveform (PCW) is extracted, quantized, and interpolated in the time domain to synthesize high-quality speech at low bit rates. A pre-alignment technique is proposed for accurate and efficient PCW extraction: it predicts the current PCW position from the previous PCW position, assuming that pitch periods evolve slowly. Since the pitch period differs from frame to frame, the original PCW is converted into a fixed-dimension PCW using a dimension-conversion method and subsequently quantized by code-excited linear predictive (CELP) coding. The excitation signal for the linear predictive coding (LPC) synthesis filter is generated by time-domain interpolation and interlinking of the quantized PCWs. The coder operates at 4.2 kbit/s or 3.2 kbit/s depending on the pitch period. An informal listening test demonstrates the effectiveness of the proposed coding scheme.
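The dimension-conversion and time-domain interpolation steps described above can be sketched as follows. This is a minimal illustration only: plain linear-interpolation resampling is an assumption, since the abstract does not specify the exact conversion method.

```python
def resample_linear(cycle, target_len):
    """Convert a variable-length pitch cycle to a fixed dimension by
    linear interpolation (a generic stand-in for dimension conversion)."""
    n = len(cycle)
    out = []
    for j in range(target_len):
        pos = j * (n - 1) / (target_len - 1)   # position in the source cycle
        i = int(pos)
        frac = pos - i
        nxt = cycle[min(i + 1, n - 1)]
        out.append(cycle[i] * (1 - frac) + nxt * frac)
    return out

def interpolate_pcw(pcw_a, pcw_b, alpha):
    """Time-domain interpolation between two fixed-dimension PCWs."""
    return [(1 - alpha) * a + alpha * b for a, b in zip(pcw_a, pcw_b)]

cycle = [0.0, 1.0, 0.0, -1.0]            # toy 4-sample pitch cycle
print(resample_linear(cycle, 7))          # → [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0]
```

`resample_linear` maps a pitch cycle of any length onto a fixed dimension so that cycles from different frames can share one quantizer; `interpolate_pcw` then blends two quantized PCWs to build a smoothly evolving excitation.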

On a Study of the Reduction of Bit Rate by the Preprocessing of the PSOLA Coding Technique in the G.723.1 Vocoder (PSOLA 전처리과정을 이용한 G.723.1 보코더의 전송률 감소에 관한 연구)

  • 장경아;조성현;배명진
    • Proceedings of the IEEK Conference
    • /
    • 2002.06d
    • /
    • pp.401-404
    • /
    • 2002
  • In general, speech coding methods are classified into three categories: waveform coding, source coding, and hybrid coding. In this paper, the reference waveform is first detected by searching for the pitch period using the NAMDF, and the similarity between the reference waveform and the waveform of each pitch period is measured. Whether a waveform is compressed is decided by a similarity threshold. If the waveform is compressed, only its magnitude and pitch information is passed to the input of the G.723.1 vocoder. After processing through the G.723.1 vocoder, the waveform is restored from the magnitude and pitch information by the PSOLA synthesis method. Simulation of the proposed algorithm shows a 31% reduction in bit rate compared with the standard 5.3 kbps G.723.1 ACELP vocoder.
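The NAMDF-based pitch search mentioned above can be sketched as follows; the lag range, the normalization, and the synthetic test signal are illustrative assumptions:

```python
import math

def namdf(x, lag_min, lag_max):
    """Normalized average magnitude difference function over a lag range
    (normalized here by the mean magnitude, one common variant)."""
    n = len(x)
    values = []
    for lag in range(lag_min, lag_max + 1):
        diff = sum(abs(x[i] - x[i + lag]) for i in range(n - lag))
        norm = sum(abs(x[i]) for i in range(n - lag)) + 1e-12
        values.append((lag, diff / norm))
    return values

def estimate_pitch(x, lag_min=20, lag_max=80):
    """Pitch lag = lag with the deepest NAMDF valley."""
    return min(namdf(x, lag_min, lag_max), key=lambda t: t[1])[0]

# synthetic voiced segment with a 50-sample pitch period
signal = [math.sin(2 * math.pi * i / 50) for i in range(400)]
print(estimate_pitch(signal))  # → 50
```

The same per-cycle similarity score (here, the NAMDF value at the pitch lag) is what drives the compress-or-transmit decision against a threshold.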


Independent Component Analysis Based on Frequency Domain Approach Model for Speech Source Signal Extraction (음원신호 추출을 위한 주파수영역 응용모델에 기초한 독립성분분석)

  • Choi, Jae-Seung
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.15 no.5
    • /
    • pp.807-812
    • /
    • 2020
  • This paper proposes a blind speech source separation algorithm using a microphone to separate only the target speech source signal in an environment in which various speech source signals are mixed. The proposed algorithm is a frequency-domain model based on the independent component analysis method. To verify the validity of independent component analysis in the frequency domain for two speech sources, the proposed algorithm is executed with different types of speech sources and the separation improvement is examined. The experimental waveforms show that the two-channel speech source signals can be clearly separated when compared with the original waveforms. In addition, experimental results using the target-signal-to-interference energy ratio show that the proposed algorithm improves speech source separation performance compared with existing algorithms.
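As a rough illustration of the separation principle, the toy sketch below separates a two-channel instantaneous mixture by whitening followed by a rotation that maximizes non-Gaussianity (measured by kurtosis). This corresponds to what frequency-domain ICA does independently in each frequency bin; the mixing matrix and sources here are illustrative assumptions, not the paper's setup:

```python
import math, random

def whiten2(x1, x2):
    """Zero-mean the two mixture channels and decorrelate them
    (2x2 PCA whitening, done in closed form)."""
    n = len(x1)
    m1, m2 = sum(x1) / n, sum(x2) / n
    a = [v - m1 for v in x1]
    b = [v - m2 for v in x2]
    c11 = sum(v * v for v in a) / n
    c22 = sum(v * v for v in b) / n
    c12 = sum(u * v for u, v in zip(a, b)) / n
    tr, det = c11 + c22, c11 * c22 - c12 * c12
    root = math.sqrt(tr * tr / 4 - det)
    l1, l2 = tr / 2 + root, tr / 2 - root   # covariance eigenvalues
    def unit(v):
        nv = math.hypot(v[0], v[1])
        return (v[0] / nv, v[1] / nv)
    v1 = unit((l1 - c22, c12))              # corresponding eigenvectors
    v2 = unit((l2 - c22, c12))
    w1 = [(v1[0] * u + v1[1] * v) / math.sqrt(l1) for u, v in zip(a, b)]
    w2 = [(v2[0] * u + v2[1] * v) / math.sqrt(l2) for u, v in zip(a, b)]
    return w1, w2

def kurtosis(x):
    n = len(x)
    m2 = sum(v * v for v in x) / n
    m4 = sum(v ** 4 for v in x) / n
    return m4 / (m2 * m2) - 3.0

def separate(w1, w2, steps=180):
    """Grid-search the whitening-plane rotation maximizing |kurtosis|,
    the core idea behind ICA contrast functions."""
    best_t = max((math.pi * k / steps for k in range(steps)),
                 key=lambda t: abs(kurtosis(
                     [math.cos(t) * u + math.sin(t) * v for u, v in zip(w1, w2)])))
    s1 = [math.cos(best_t) * u + math.sin(best_t) * v for u, v in zip(w1, w2)]
    s2 = [-math.sin(best_t) * u + math.cos(best_t) * v for u, v in zip(w1, w2)]
    return s1, s2

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    da = math.sqrt(sum((u - ma) ** 2 for u in a))
    db = math.sqrt(sum((v - mb) ** 2 for v in b))
    return num / (da * db)

random.seed(1)
src1 = [math.sin(2 * math.pi * i / 64) for i in range(4000)]   # periodic source
src2 = [random.uniform(-1, 1) for _ in range(4000)]            # interference
mix1 = [0.6 * a + 0.4 * b for a, b in zip(src1, src2)]
mix2 = [0.5 * a - 0.7 * b for a, b in zip(src1, src2)]
s1, s2 = separate(*whiten2(mix1, mix2))
print(max(abs(corr(s1, src1)), abs(corr(s2, src1))) > 0.95)  # → True
```

Frequency-domain ICA additionally has to resolve the permutation and scaling ambiguity across bins, which this sketch omits.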

A study on the phonemic feature changes according to Korean speech waveform edition (한국어 음성 파형의 편집에 의한 한국어 음운 변화에 관한 연구)

  • Kim, Seon-Il;Hong, Ki-Won;Lee, Haing-Sei
    • The Journal of the Acoustical Society of Korea
    • /
    • v.13 no.6
    • /
    • pp.60-65
    • /
    • 1994
  • A study on phonemic feature changes was carried out through human perception of the phonemic features of Korean speech waveforms edited by partial elimination or exchange of waveform segments. We found that speech waveforms have transitional, stationary, equivalent, and critical phonemic parts.


On a Multiband Nonuniform Sampling Technique with a Gaussian Noise Codebook for Speech Coding (가우시안 코드북을 갖는 다중대역 비균일 음성 표본화법)

  • Chung, Hyung-Goue;Bae, Myung-Jin
    • The Journal of the Acoustical Society of Korea
    • /
    • v.16 no.6
    • /
    • pp.110-114
    • /
    • 1997
  • When applying nonuniform sampling to a noisy speech signal, the required data rate increases to be comparable to or greater than that of uniform sampling such as PCM. To solve this problem, we previously proposed multiband nonuniform waveform coding (MNWC), a waveform coding method that applies nonuniform sampling to the band-separated speech signal [7]. However, its speech quality deteriorates compared with the uniform sampling method because the high band is simply modeled as Gaussian noise with an average level. In this paper, to overcome this drawback, the high band is modeled as one of 16 codewords with different center frequencies. In this way, while maintaining high speech quality with an average MOS score of 3.16, the proposed method achieves a compression ratio 1.5 times higher than that of the conventional nonuniform sampling method (CNSM).
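The codeword selection for the high-band model can be sketched as a nearest-neighbor search over the codebook; the toy codebook of 16 sinusoids with different center frequencies below is an illustrative assumption, not the paper's trained codebook:

```python
import math

def nearest_codeword(segment, codebook):
    """Pick the codebook index closest to the input segment in
    mean-squared error — the basic codebook selection step."""
    def mse(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b)) / len(a)
    return min(range(len(codebook)), key=lambda k: mse(segment, codebook[k]))

# toy codebook: 16 entries with center frequencies 100, 150, ..., 850 Hz at 8 kHz
codebook = [[math.sin(2 * math.pi * (100 + 50 * k) * i / 8000) for i in range(160)]
            for k in range(16)]
segment = [math.sin(2 * math.pi * 300 * i / 8000) for i in range(160)]
print(nearest_codeword(segment, codebook))  # → 4
```

Only the 4-bit codeword index (plus a gain) would need to be transmitted for the high band, which is how the codebook buys back bit rate over sending the samples themselves.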


A Speech Enhancement Algorithm based on Human Psychoacoustic Property (심리음향 특성을 이용한 음성 향상 알고리즘)

  • Jeon, Yu-Yong;Lee, Sang-Min
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.59 no.6
    • /
    • pp.1120-1125
    • /
    • 2010
  • In speech systems such as hearing aids and speech communication, speech quality is degraded by environmental noise. In this study, to enhance such degraded speech, we propose an algorithm that reduces the noise and reinforces the speech. The minima-controlled recursive averaging (MCRA) algorithm is used to estimate the noise spectrum, and a spectral weighting factor is used to reduce the noise. The partial masking effect, one of the properties of human hearing, is introduced to reinforce the speech. We then compared the waveform, spectrogram, Perceptual Evaluation of Speech Quality (PESQ), and segmental signal-to-noise ratio (segSNR) of the original speech, the noisy speech, the noise-reduced speech, and the speech enhanced by the proposed method. As a result, the speech enhanced by the proposed method is reinforced in the high frequencies degraded by noise, and both PESQ and segSNR are improved, indicating enhanced speech quality.
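The noise-reduction step via spectral weighting can be sketched per frequency bin as below. The Wiener-style gain and the gain floor are generic assumptions, since the abstract does not give the exact weighting function, and the MCRA noise estimate is taken as given:

```python
def spectral_gain(signal_psd, noise_psd, floor=0.1):
    """Per-bin spectral weighting: attenuate bins where the estimated
    noise power dominates, with a gain floor to limit musical noise."""
    gains = []
    for s, n in zip(signal_psd, noise_psd):
        g = 1.0 - n / s if s > 0 else floor   # Wiener-style gain estimate
        gains.append(max(g, floor))
    return gains

# two bins: one speech-dominated, one noise-dominated
print(spectral_gain([10.0, 1.0], [1.0, 1.0]))  # → [0.9, 0.1]
```

The gains multiply the noisy spectrum before resynthesis; the partial-masking reinforcement in the paper would then further boost bins that noise would otherwise mask.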

Information Dimensions of Speech Phonemes

  • Lee, Chang-Young
    • Speech Sciences
    • /
    • v.3
    • /
    • pp.148-155
    • /
    • 1998
  • As an application of dimensional analysis in the theory of chaos and fractals, we estimated the information dimension for various phonemes. By constructing phase-space vectors from the time-series speech signals, we calculated the natural measure and the Shannon information from the trajectories. The information dimension was then obtained as the slope of the plot of information versus space-division order. The information dimension was found to be highly sensitive to the waveform and the time delay. By averaging over frames for various phonemes, we found that the information dimension ranges from 1.2 to 1.4.
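The estimation procedure described above can be sketched as follows: delay embedding, Shannon information of the box occupancy at several partition orders, and a least-squares slope. The embedding parameters and the pure-tone test signal are illustrative assumptions:

```python
import math

def delay_embed(x, dim=2, tau=22):
    """Phase-space reconstruction by time-delay embedding
    (tau chosen near a quarter period of the test tone below)."""
    return [tuple(x[i + k * tau] for k in range(dim))
            for i in range(len(x) - (dim - 1) * tau)]

def shannon_information(points, divisions):
    """Shannon information of box occupancies at one partition order."""
    dim = len(points[0])
    lo = [min(p[k] for p in points) for k in range(dim)]
    hi = [max(p[k] for p in points) for k in range(dim)]
    counts = {}
    for p in points:
        box = tuple(min(int((p[k] - lo[k]) / (hi[k] - lo[k]) * divisions),
                        divisions - 1) for k in range(dim))
        counts[box] = counts.get(box, 0) + 1
    n = len(points)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def information_dimension(x, orders=(4, 8, 16, 32)):
    """Least-squares slope of information versus log2(partition order)."""
    pts = delay_embed(x)
    xs = [math.log2(d) for d in orders]
    ys = [shannon_information(pts, d) for d in orders]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

tone = [math.sin(0.07 * i) for i in range(5000)]
print(information_dimension(tone))  # close to 1 for a closed curve
```

For a pure tone the phase-space trajectory is a closed curve, so the slope comes out near 1; real speech frames yield non-integer values such as the 1.2–1.4 reported in the abstract.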


A Study on SNR Estimation of Continuous Speech Signal (연속음성신호의 SNR 추정기법에 관한 연구)

  • Song, Young-Hwan;Park, Hyung-Woo;Bae, Myung-Jin
    • The Journal of the Acoustical Society of Korea
    • /
    • v.28 no.4
    • /
    • pp.383-391
    • /
    • 2009
  • In speech signal processing, a speech signal corrupted by noise should be enhanced to improve quality. Noise estimation methods usually need flexibility for variable environments: the noise profile is renewed in silence regions to avoid the influence of speech, so voice regions must be found in a preprocessing step before noise estimation. However, if the received signal has no silence region, that approach cannot be applied. In this paper, we propose an SNR estimation method for continuous speech signals. The waveform in the stationary region of voiced speech is highly correlated across pitch periods, so the SNR can be estimated from the correlation of neighboring waveforms after dividing a frame by pitch period. For unvoiced speech, the vocal tract characteristic is reflected in the noise, so the SNR can be estimated from the spectral distance between the spectrum of the received signal and the estimated vocal tract spectrum. Lastly, since the energy of a speech signal is mostly concentrated in voiced regions, the SNR can also be estimated from the ratio of voiced-region energy to unvoiced-region energy.
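The voiced-region estimator described above can be sketched as follows: divide a frame into pitch cycles, correlate adjacent cycles, and map the correlation to an SNR. The r/(1 − r) mapping is an illustrative assumption, not the paper's exact formula:

```python
import math, random

def corr(a, b):
    """Normalized correlation coefficient of two equal-length cycles."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    da = math.sqrt(sum((u - ma) ** 2 for u in a))
    db = math.sqrt(sum((v - mb) ** 2 for v in b))
    return num / (da * db)

def cycle_similarity(x, period):
    """Mean correlation of adjacent pitch cycles in a voiced frame."""
    cycles = [x[i:i + period] for i in range(0, len(x) - period + 1, period)]
    rs = [corr(a, b) for a, b in zip(cycles, cycles[1:])]
    return sum(rs) / len(rs)

def snr_estimate_db(r):
    """Map cycle correlation r to an SNR estimate, treating r as the
    correlated (signal) share of the power and 1 - r as the noise share."""
    return 10 * math.log10(r / (1 - r))

random.seed(0)
clean = [math.sin(2 * math.pi * i / 50) for i in range(500)]
noisy = [s + random.gauss(0, 0.5) for s in clean]
print(cycle_similarity(clean, 50) > cycle_similarity(noisy, 50))  # → True
print(round(snr_estimate_db(0.9), 1))  # → 9.5
```

Noise decorrelates neighboring cycles while the voiced waveform repeats, so the similarity drops monotonically as noise is added; that is what makes the correlation usable as an SNR proxy without any silence region.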

Speech Recognition for the Korean Vowel 'ㅣ' based on Waveform-feature Extraction and Neural-network Learning (파형 특징 추출과 신경망 학습 기반 모음 'ㅣ' 음성 인식)

  • Rho, Wonbin;Lee, Jongwoo;Lee, Jaewon
    • KIISE Transactions on Computing Practices
    • /
    • v.22 no.2
    • /
    • pp.69-76
    • /
    • 2016
  • With the recent increase of interest in IoT across almost all areas of industry, computing technologies are increasingly applied in human environments such as houses, buildings, cars, and streets; in these IoT environments, speech recognition is widely accepted as a means of HCI. Existing server-based speech recognition techniques are typically fast and show quite high recognition rates; however, an internet connection is necessary, and complicated server computing is required because a voice is recognized in units of words stored in server databases. This paper, continuing previous research on speech recognition algorithms for the Korean phonemic vowels 'ㅏ' and 'ㅓ', presents an implementation of speech recognition algorithms for the Korean phonemic vowel 'ㅣ'. We observed that almost all vocal waveform patterns for 'ㅣ' are unique when compared with the patterns of the 'ㅏ' and 'ㅓ' waveforms. We propose specific waveform patterns for the Korean vowel 'ㅣ' and the corresponding recognition algorithms, and present experimental results showing that adding neural-network learning to our algorithm increases the recognition success rate. As a result, 90% or more of the vocal expressions of the vowel 'ㅣ' were successfully recognized when our algorithms were used.
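A minimal sketch of the waveform-feature-plus-learning approach: two toy waveform features feed a single-neuron (logistic) classifier. The features, the sinusoidal training data, and the network size are illustrative assumptions, far simpler than the paper's vowel patterns:

```python
import math

def waveform_features(x):
    """Two toy waveform features: zero-crossing rate and mean absolute
    sample-to-sample slope (stand-ins for the paper's waveform patterns)."""
    zc = sum(1 for a, b in zip(x, x[1:]) if a * b < 0) / len(x)
    slope = sum(abs(b - a) for a, b in zip(x, x[1:])) / len(x)
    return [zc, slope]

def train_logistic(samples, labels, lr=1.0, epochs=200):
    """Single-neuron classifier trained by per-sample gradient descent."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for f, y in zip(samples, labels):
            z = w[0] * f[0] + w[1] * f[1] + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid activation
            g = p - y                        # gradient of the log loss
            w[0] -= lr * g * f[0]
            w[1] -= lr * g * f[1]
            b -= lr * g
    return w, b

# class 0: low-frequency tones; class 1: high-frequency tones
waves = [[math.sin(2 * math.pi * f * i / 800) for i in range(800)]
         for f in (5, 6, 7, 40, 50, 60)]
labels = [0, 0, 0, 1, 1, 1]
w, b = train_logistic([waveform_features(wv) for wv in waves], labels)

probe = waveform_features([math.sin(2 * math.pi * 45 * i / 800) for i in range(800)])
z = w[0] * probe[0] + w[1] * probe[1] + b
print(z > 0)  # grouped with the high-frequency class → True
```

The same structure scales up in the paper's setting: hand-designed waveform patterns supply the features, and the learned weights replace hand-tuned decision thresholds.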

Clinical utility of auditory perceptual assessments in the discrimination of a diplophonic voice (이중음성 판별에 있어 청지각적 평가의 임상적 유용성)

  • Bae, Inho;Kwon, Soonbok
    • Phonetics and Speech Sciences
    • /
    • v.10 no.1
    • /
    • pp.75-81
    • /
    • 2018
  • Diplophonia is generally defined as the perception of more than one fundamental frequency component in a voice. Its perceptual aspect has traditionally been used to evaluate diplophonia because perceptions can be evaluated easily, but there are limitations in validity and reliability related to intra- and inter-rater agreement, the examination situation, and the variation of voice samples. Therefore, the purpose of this study is to confirm the reliability and accuracy of auditory perceptual evaluation by comparison with non-invasive indirect assessment methods (acoustic waveform and EGG analysis), and to identify their usefulness for diplophonia. A total of 28 diplophonic voices and 39 non-periodic voices were assessed. Three raters assessed diplophonia by performing an auditory perceptual evaluation and by identifying the quasi-periodic perturbations of the acoustic waveform and the EGG. For the three discrimination methods, intra- and inter-rater reliability, sensitivity, specificity, accuracy, positive likelihood ratio, and negative likelihood ratio were examined, and the McNemar test was performed to compare discriminant agreement. The accuracy of the auditory perceptual evaluation (86.57%) was not significantly different from that of acoustic waveform analysis (88.06%), but it was significantly different from that of EGG analysis (83.33%). The reading time for the auditory perceptual evaluation (6.02 s) was significantly different from that for acoustic waveform analysis (30.15 s) and EGG analysis (16.41 s). In discriminating diplophonia, auditory perceptual evaluation has sufficient reliability and accuracy compared with acoustic waveform and EGG analysis, and since immediate feedback is possible, it is more convenient. Therefore, it can continue to be used as a tool to discriminate diplophonia in clinical practice.