• Title/Summary/Keyword: Speech spectrum


Formant-broadened CMS Using the Log-spectrum Transformed from the Cepstrum (켑스트럼으로부터 변환된 로그 스펙트럼을 이용한 포먼트 평활화 켑스트럴 평균 차감법)

  • 김유진;정혜경;정재호
    • The Journal of the Acoustical Society of Korea
    • /
    • v.21 no.4
    • /
    • pp.361-373
    • /
    • 2002
  • In this paper, we propose a channel normalization method to improve the performance of cepstral mean subtraction (CMS), which is widely adopted to normalize channel variation in speech and speaker recognition. CMS, which estimates the channel by averaging the long-term cepstrum, has the weakness that the estimate is biased by the formants of voiced speech, which carry useful speech information. The proposed formant-broadened cepstral mean subtraction (FBCMS) is based on two facts: the formants can be found easily in the log spectrum, which is obtained from the cepstrum by a Fourier transform, and the formants correspond to the dominant poles of the all-pole model that typically models the vocal tract. FBCMS selects the poles to be broadened directly from the log spectrum, without polynomial factorization, and produces a formant-broadened cepstrum by widening the bandwidths of the formant poles. The channel cepstrum can then be estimated effectively by averaging the formant-broadened cepstral coefficients. We compared FBCMS with conventional CMS and pole-filtered CMS (PFCMS) using four simulated telephone channels. In the channel estimation experiment, we measured the cepstral distance between the real channel and the estimated channel and found that the proposed method yields a mean cepstrum closer to the channel cepstrum, because the bias of the mean cepstrum toward speech is reduced. In a text-independent speaker identification experiment, the proposed method was superior to conventional CMS and comparable to pole-filtered CMS. Consequently, the proposed method efficiently normalizes channel variation within the framework of conventional CMS.
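
The baseline operation that FBCMS refines, plain cepstral mean subtraction, can be sketched as follows. This is a minimal illustration, not the authors' FBCMS implementation: the formant pole-broadening step is omitted, and the data are synthetic stand-ins.

```python
import numpy as np

def cepstral_mean_subtraction(cepstra):
    """Estimate the channel as the long-term mean cepstrum and subtract
    it from every frame. cepstra: (n_frames, n_coeffs) array."""
    channel_estimate = cepstra.mean(axis=0)
    return cepstra - channel_estimate

# A convolutive channel adds a constant offset in the cepstral domain,
# so a fixed offset applied to every frame is removed exactly.
rng = np.random.default_rng(0)
frames = rng.normal(size=(200, 13))      # stand-in cepstral frames
channel = np.linspace(0.5, -0.2, 13)     # stand-in channel cepstrum
normalized = cepstral_mean_subtraction(frames + channel)
```

The weakness the paper targets is visible here: the mean also contains the average speech (formant) cepstrum, so the channel estimate is biased toward speech. FBCMS reduces that bias by broadening formant bandwidths before averaging.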

Noise Spectral Shaping in Speech Waveform Coding (음성파형 부호화에서의 잡음 SPECTRUM 변형에 관한 연구)

  • 이황수;은종관
    • The Journal of the Acoustical Society of Korea
    • /
    • v.3 no.2
    • /
    • pp.69-90
    • /
    • 1984
  • In this paper, the performance of APCM, ADPCM, and ADM speech coders with noise spectral shaping is studied. Two noise spectral shaping schemes are considered: in APCM and ADPCM, a noise feedback filter that minimizes the C-message-weighted quantization noise is adopted, while in ADM part of the in-band noise is shifted outside the signal band. The performance of the APCM and ADPCM coders was measured using the frequency-weighted signal-to-noise ratio and the segmental FWSQNR. Simulation results with real speech show that coders with noise spectral shaping outperform those without it by 0.5 to 3 dB. Although this improvement is quantitatively modest, listening to the actual speech confirmed that the perceived quality improves noticeably.
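
The noise-feedback idea can be sketched with a first-order error-feedback quantizer. This is a simplified stand-in, not the paper's C-message-weighted filter: the feedback coefficient and step size here are illustrative.

```python
import numpy as np

def noise_feedback_quantize(x, step, a=0.9):
    """Uniform quantizer with first-order error feedback: the previous
    quantization error, scaled by `a`, is added back to the input, so
    the output noise y - x is shaped by H(z) = 1 - a*z^-1 (pushed toward
    high frequencies for a > 0)."""
    y = np.empty_like(x)
    e = 0.0                               # previous quantization error
    for n, xn in enumerate(x):
        v = xn + a * e
        y[n] = step * np.round(v / step)  # plain uniform quantizer
        e = v - y[n]
    return y

t = np.arange(2000) / 8000.0
speech_like = np.sin(2 * np.pi * 200 * t)  # toy input signal
shaped = noise_feedback_quantize(speech_like, step=0.05)
```

The total noise power is not reduced; it is redistributed in frequency so that a perceptual weighting (such as C-message) sees less of it.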


Design of a 4kb/s ACELP Codec Using the Generalized AbS Principle (Generalized AbS 구조를 이용한 4kb/s ACELP 음성 부호화기의 설계)

  • 성호상;강상원
    • The Journal of the Acoustical Society of Korea
    • /
    • v.18 no.7
    • /
    • pp.33-38
    • /
    • 1999
  • In this paper, we combine a generalized analysis-by-synthesis (AbS) structure with an algebraic excitation scheme to propose a new 4 kb/s speech codec. The codec partly reuses the structure of G.729. We design a line spectrum pair (LSP) quantizer, an adaptive codebook, and an excitation codebook to fit the 4 kb/s bit rate. The codec has a 25 ms algorithmic delay, corresponding to a 20 ms frame size and a 5 ms lookahead. At bit rates below 4 kb/s, most CELP speech codecs based on the AbS principle suffer a rapid degradation in speech quality. To overcome this drawback, we use the generalized AbS structure, which is efficient for low-bit-rate speech coding. The LP coefficients are converted to LSPs and quantized with a predictive two-stage VQ. A low-complexity algebraic codebook using a shifting method provides the fixed codebook excitation, and the gains of the adaptive and fixed codebooks are quantized with VQ. To evaluate the proposed codec, A-B preference tests were conducted against the fixed-rate 8 kb/s QCELP; the results show that the performance of the proposed codec is similar to that of the fixed-rate 8 kb/s QCELP.
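
The two-stage VQ used for the LSPs quantizes a vector with a first codebook and its residual with a second, transmitting only the two indices. A minimal non-predictive sketch (toy codebooks chosen for illustration, not the codec's trained tables, and omitting the predictor):

```python
import numpy as np

def two_stage_vq(x, cb1, cb2):
    """Stage 1 picks the nearest codeword to x; stage 2 quantizes the
    stage-1 residual. Only the two indices need to be transmitted."""
    i1 = int(np.argmin(((cb1 - x) ** 2).sum(axis=1)))
    residual = x - cb1[i1]
    i2 = int(np.argmin(((cb2 - residual) ** 2).sum(axis=1)))
    return i1, i2, cb1[i1] + cb2[i2]      # indices and reconstruction

cb1 = np.array([[0.0, 0.0], [1.0, 1.0]])  # coarse codebook (toy)
cb2 = np.array([[0.0, 0.0], [0.1, -0.1]]) # residual codebook (toy)
i1, i2, recon = two_stage_vq(np.array([1.1, 0.9]), cb1, cb2)
```

Splitting the quantizer into stages keeps the total search and storage cost far below a single codebook of equivalent resolution.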


Extraction of MFCC feature parameters based on the PCA-optimized filter bank and Korean connected 4-digit telephone speech recognition (PCA-optimized 필터뱅크 기반의 MFCC 특징파라미터 추출 및 한국어 4연숫자 전화음성에 대한 인식실험)

  • 정성윤;김민성;손종목;배건성
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.6
    • /
    • pp.279-283
    • /
    • 2004
  • In general, triangular filters are used in the filter bank when MFCC feature parameters are extracted from the spectrum of a speech signal. A different approach, proposed by Lee et al. to improve the recognition rate, uses filter shapes that are optimized to the spectrum of the training speech data; a principal component analysis is used to obtain the optimized filter coefficients. In this paper, using a large connected 4-digit telephone speech database, we extract MFCCs based on the PCA-optimized filter bank and compare the recognition performance with conventional MFCCs and MFCCs based on a directly weighted filter bank. Experimental results show that MFCCs based on the PCA-optimized filter bank give a slight improvement in recognition rate over conventional MFCCs, but fail to outperform MFCCs based on the directly weighted filter bank analysis. The experimental results are discussed along with our findings.
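
The idea of a PCA-optimized filter shape can be sketched per band: replace the triangle with the leading principal component of the training spectra falling inside that band. This is a simplified reading of the approach; the sign resolution and normalization convention here are choices made for illustration.

```python
import numpy as np

def pca_band_filter(band_spectra):
    """Leading principal component of training power spectra restricted
    to one mel band, used as a data-driven filter shape.
    band_spectra: (n_frames, n_bins_in_band) array."""
    centered = band_spectra - band_spectra.mean(axis=0)
    cov = centered.T @ centered / len(centered)
    vals, vecs = np.linalg.eigh(cov)       # ascending eigenvalues
    shape = vecs[:, -1]                    # largest-eigenvalue direction
    shape = np.abs(shape)                  # resolve sign, keep weights >= 0
    return shape / shape.sum()             # normalize like a filter

rng = np.random.default_rng(1)
train = rng.random((500, 8))               # toy spectra for one band
w = pca_band_filter(train)
```

The resulting weights emphasize the spectral bins that vary most in the training data, instead of imposing a fixed triangular emphasis.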

Therapeutic Robot Action Design for ASD Children Using Speech Data (음성 정보를 이용한 자폐아 치료용 로봇의 동작 설계)

  • Lee, Jin-Gyu;Lee, Bo-Hee
    • Journal of IKEEE
    • /
    • v.22 no.4
    • /
    • pp.1123-1130
    • /
    • 2018
  • A cat robot for treating Autism Spectrum Disorder (ASD) was designed and field-tested. The robot expressed emotions through touch-based interaction and performed reasonable emotional expressions based on an artificial neural network (ANN). However, these behaviors were difficult to use across varied healing activities. In this paper, we describe a motion design that can be used in a variety of contexts and reacts flexibly to various situations. As a necessary element, a speech recognition system based on a speech data collection method and an ANN is suggested, and the classification results are analyzed after experiments. In future work, this ANN will be improved by collecting more diverse voice data to raise its accuracy, and its effectiveness will be checked through field tests.
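
The classification step of such a system can be sketched with a minimal one-layer softmax network. This is a stand-in for the paper's ANN, not its architecture: the feature vectors, class count, and hyperparameters below are toy assumptions.

```python
import numpy as np

def train_softmax(X, y, n_classes, lr=0.5, epochs=300, seed=0):
    """Train a one-layer softmax classifier by gradient descent.
    Rows of X are feature vectors (e.g. averaged spectral features of a
    spoken command); y holds integer class labels."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.01, size=(X.shape[1], n_classes))
    b = np.zeros(n_classes)
    onehot = np.eye(n_classes)[y]
    for _ in range(epochs):
        logits = X @ W + b
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        grad = (p - onehot) / len(X)       # cross-entropy gradient
        W -= lr * X.T @ grad
        b -= lr * grad.sum(axis=0)
    return W, b

def predict(X, W, b):
    return np.argmax(X @ W + b, axis=1)

# Two well-separated toy "voice command" classes.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.5, (50, 4)), rng.normal(3, 0.5, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
W, b = train_softmax(X, y, n_classes=2)
```

The predicted class index would then be mapped to one of the robot's designed motions.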

2.4kbps Speech Coding Algorithm Using the Sinusoidal Model (정현파 모델을 이용한 2.4kbps 음성부호화 알고리즘)

  • 백성기;배건성
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.27 no.3A
    • /
    • pp.196-204
    • /
    • 2002
  • Sinusoidal transform coding (STC) is a vocoding scheme based on a sinusoidal model of the speech signal. Low bit-rate speech coding based on the sinusoidal model represents and synthesizes speech with the fundamental frequency and its harmonics, the spectral envelope, and the phase in the frequency domain. In this paper, we propose a 2.4 kbps low-rate speech coding algorithm using the sinusoidal model. In the proposed coder, the pitch frequency is estimated as the frequency that minimizes the mean squared error between speech synthesized with all spectral peaks and speech synthesized with the chosen frequency and its harmonics. The spectral envelope is estimated using the spectral envelope estimation vocoder (SEEVOC) algorithm and the discrete all-pole model. The phase information is obtained from the time of pitch pulse occurrence, i.e., the onset time, together with the phase of the vocal tract system. Experimental results show that the synthetic speech preserves both the formant and phase information of the original speech very well. The performance of the coder was evaluated with informal listening tests, where it achieved a MOS score above 3.1.
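
The pitch search, choosing the F0 whose harmonic synthesis best matches the full-peak spectrum, can be sketched as a harmonic-energy search over candidates. This simplifies the MSE criterion to maximizing the energy captured at harmonic bins; the search range and bin mapping are illustrative.

```python
import numpy as np

def harmonic_pitch(mag, fs, f0_lo=60.0, f0_hi=400.0):
    """Pick the F0 candidate whose harmonic bins capture the most
    spectral energy. mag: magnitude spectrum over 0..fs/2 (len n)."""
    n = len(mag)
    bin_hz = (fs / 2) / n
    best_f0, best_score = f0_lo, -1.0
    for f0 in np.arange(f0_lo, f0_hi, 1.0):
        bins = np.round(np.arange(f0, fs / 2, f0) / bin_hz).astype(int)
        score = float((mag[bins[bins < n]] ** 2).sum())
        if score > best_score:
            best_f0, best_score = f0, score
    return best_f0

# Toy spectrum with harmonics of 125 Hz (fs = 8 kHz, 512 bins).
mag = np.zeros(512)
mag[[16, 32, 48, 64]] = 1.0               # 125, 250, 375, 500 Hz
f0 = harmonic_pitch(mag, 8000)
```

A real coder would also penalize energy the harmonic comb fails to explain, which suppresses sub-octave errors more robustly than the raw energy sum.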

Feature Parameter Extraction and Speech Recognition Using Matrix Factorization (Matrix Factorization을 이용한 음성 특징 파라미터 추출 및 인식)

  • Lee Kwang-Seok;Hur Kang-In
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.10 no.7
    • /
    • pp.1307-1311
    • /
    • 2006
  • In this paper, we propose a new speech feature parameter that uses matrix factorization to represent part-based features of the speech spectrum. The proposed parameter is an effective dimension-reduced representation of multi-dimensional feature data, obtained through a matrix factorization procedure under the constraint that all matrix elements are non-negative. The reduced feature data represent part-based features of the input data. We verify the usefulness of the non-negative matrix factorization (NMF) algorithm for speech feature extraction by applying NMF to the Mel-scaled filter bank outputs. Recognition experiments confirm that the proposed feature parameter outperforms the widely used Mel-frequency cepstral coefficients (MFCCs).
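
The factorization itself can be sketched with the classic multiplicative updates (the Lee-Seung Frobenius-norm rules). In the paper's setting V would hold Mel filter bank outputs; here it is random exactly low-rank data for illustration.

```python
import numpy as np

def nmf(V, r, n_iter=500, seed=0):
    """Factor non-negative V (features x frames) into W (parts) and
    H (activations) with multiplicative updates minimizing ||V - WH||_F.
    Multiplicative updates preserve non-negativity at every step."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], r)) + 0.1
    H = rng.random((r, V.shape[1])) + 0.1
    eps = 1e-12                            # guard against division by zero
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

rng = np.random.default_rng(3)
V = rng.random((20, 3)) @ rng.random((3, 40))  # rank-3, non-negative
W, H = nmf(V, r=3)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

The columns of W play the role of spectral "parts"; the columns of H, the per-frame activations, serve as the reduced feature vectors.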

Cepstral and spectral analysis of voices with adductor spasmodic dysphonia (내전형연축성 발성장애 음성에 대한 켑스트럼과 스펙트럼 분석)

  • Shim, Hee Jeong;Jung, Hun;Lee, Sue Ann;Choi, Byung Heun;Heo, Jeong Hwa;Ko, Do-Heung
    • Phonetics and Speech Sciences
    • /
    • v.8 no.2
    • /
    • pp.73-80
    • /
    • 2016
  • The purpose of this study was to analyze perceptual and spectral/cepstral measurements in patients with adductor spasmodic dysphonia (ADSD). Sixty gender- and age-matched participants (30 with ADSD and 30 controls) were recorded reading a sentence and sustaining the vowel /a/. The recordings were analyzed acoustically by measuring CPP, L/H ratio, mean CPP F0, and CSID, and auditory-perceptual ratings were obtained using the GRBAS scale. The main results can be summarized as follows: (a) the CSID for connected speech was significantly higher than for the sustained vowel; (b) the G, R, and S ratings for connected speech were significantly higher than for the sustained vowel; (c) the spectral/cepstral parameters were significantly correlated with the perceptual parameters; and (d) ROC analysis showed that a CSID threshold of 13.491 achieved good classification for ADSD, with 86.7% sensitivity and 96.7% specificity. Spectral and cepstral analysis of connected speech is especially meaningful in cases where perceptual analysis and clinical evaluation alone are insufficient.
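
The CPP measure can be sketched as: take the real cepstrum of a frame, find the peak in the pitch quefrency range, and report its height above a regression line fitted over that range. This is a simplified Hillenbrand-style computation; windowing and dB scaling conventions differ across clinical tools.

```python
import numpy as np

def cpp(x, fs, f0_lo=60.0, f0_hi=330.0):
    """Cepstral peak prominence: height of the dominant cepstral peak
    (within the pitch quefrency range) above a line regressed over
    that quefrency range."""
    log_mag = np.log(np.abs(np.fft.fft(x)) + 1e-12)
    ceps = np.fft.ifft(log_mag).real       # real cepstrum
    lo, hi = int(fs / f0_hi), int(fs / f0_lo)  # pitch quefrency range
    quef = np.arange(lo, hi)
    peak = lo + int(np.argmax(ceps[lo:hi]))
    slope, intercept = np.polyfit(quef, ceps[lo:hi], 1)
    return ceps[peak] - (slope * peak + intercept)

fs = 8000
rng = np.random.default_rng(4)
periodic = np.zeros(4000)
periodic[::80] = 1.0                       # 100 Hz pulse train
noise = rng.normal(size=4000)
```

A strongly periodic voice produces a prominent rahmonic peak and a high CPP; the breathy or strained phonation typical of ADSD flattens the cepstrum and lowers it.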

Vocal Tract Length Normalization for Speech Recognition (음성인식을 위한 성도 길이 정규화)

  • 지상문
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.7 no.7
    • /
    • pp.1380-1386
    • /
    • 2003
  • Speech recognition performance is degraded by the variation in vocal tract length among speakers. In this paper, we use a vocal tract length normalization method in which the frequency axis of the short-time spectrum of a speaker's speech is scaled to minimize the effect of vocal tract length on recognition performance. To normalize vocal tract length, we tried several frequency warping functions, such as linear and piece-wise linear functions. A variable-interval piece-wise linear warping function is proposed to effectively model the frequency-axis scaling caused by large variations in vocal tract length. Experiments on TIDIGITS connected digits showed a dramatic reduction in word error rate, from 2.15% to 0.53%, with the proposed vocal tract length normalization.
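
The standard piece-wise linear VTLN map that the proposed variable-interval function generalizes can be sketched with a single break point (the paper's variable-interval form would add more, speaker-dependent segments; the break frequency below is illustrative):

```python
import numpy as np

def piecewise_linear_warp(f, alpha, f_break, f_nyq):
    """Piece-wise linear VTLN warp: scale frequencies by `alpha` below
    the break frequency, then continue with the slope that maps the
    Nyquist frequency onto itself (so the band edges are preserved)."""
    f = np.asarray(f, dtype=float)
    upper_slope = (f_nyq - alpha * f_break) / (f_nyq - f_break)
    warped_low = alpha * f
    warped_high = alpha * f_break + upper_slope * (f - f_break)
    return np.where(f <= f_break, warped_low, warped_high)

freqs = np.linspace(0, 4000, 9)
warped = piecewise_linear_warp(freqs, alpha=1.1, f_break=3200, f_nyq=4000)
```

In practice the scale factor alpha is chosen per speaker, typically by a maximum-likelihood grid search over a small range around 1.0.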

Discussions on Auditory-Perceptual Evaluation Performed in Patients With Voice Disorders (음성장애 환자에서 시행되는 청지각적 평가에 대한 논의)

  • Lee, Seung Jin
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics
    • /
    • v.32 no.3
    • /
    • pp.109-117
    • /
    • 2021
  • The auditory-perceptual evaluation performed by speech-language pathologists (SLPs) in patients with voice disorders is often regarded as a touchstone among multi-dimensional voice evaluation procedures and provides important information not available from other assessment modalities. SLPs therefore need to conduct a comprehensive, in-depth evaluation not only of the voice but of the overall speech production mechanism, and they often encounter various difficulties in the process. In addition, SLPs should strive to avoid bias during the evaluation and to maintain a wide and consistent severity spectrum for each voice quality parameter. Lastly, it is very important for SLPs to take a team approach, documenting and delivering important auditory-perceptual information appropriately and efficiently through close communication with laryngologists.