• Title/Summary/Keyword: cepstral coefficients

Search results: 113

Formant-broadened CMS Using the Log-spectrum Transformed from the Cepstrum (켑스트럼으로부터 변환된 로그 스펙트럼을 이용한 포먼트 평활화 켑스트럴 평균 차감법)

  • 김유진;정혜경;정재호
    • The Journal of the Acoustical Society of Korea
    • /
    • v.21 no.4
    • /
    • pp.361-373
    • /
    • 2002
  • In this paper, we propose a channel normalization method to improve the performance of CMS (cepstral mean subtraction), which is widely adopted to normalize channel variation for speech and speaker recognition. CMS, which estimates the channel effect by averaging the long-term cepstrum, has a weak point: the estimated channel is biased by the formants of voiced speech, which carry useful speech information. The proposed Formant-Broadened Cepstral Mean Subtraction (FBCMS) is based on the facts that the formants can be found easily in the log spectrum, obtained from the cepstrum by Fourier transform, and that the formants correspond to the dominant poles of the all-pole model that commonly models the vocal tract. FBCMS selects the poles to be broadened directly from the log spectrum, without polynomial factorization, and produces a formant-broadened cepstrum by widening the bandwidths of the formant poles. The channel cepstrum can then be estimated effectively by averaging the formant-broadened cepstral coefficients. We performed experiments comparing FBCMS with CMS and pole-filtered CMS (PFCMS) using four simulated telephone channels. In the channel estimation experiment, we evaluated the cepstral distance between the real channel and the estimated channel and found that the mean cepstrum came closer to the channel cepstrum because the bias of the mean cepstrum toward speech was softened. In a text-independent speaker identification experiment, the proposed method was superior to conventional CMS and comparable to pole-filtered CMS. Consequently, the proposed method efficiently normalizes channel variation within the framework of conventional CMS.
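
The baseline that FBCMS builds on is plain cepstral mean subtraction: treat the long-term average cepstrum as the channel estimate and subtract it from every frame. Below is a minimal NumPy sketch of that baseline only (not the formant-broadening step, which requires manipulating pole bandwidths); the frame-by-coefficient matrix `cepstra` is a hypothetical input.

```python
import numpy as np

def cepstral_mean_subtraction(cepstra):
    """Plain CMS: subtract the long-term average cepstrum (the channel
    estimate) from every frame.

    cepstra : (num_frames, num_coeffs) array of cepstral coefficients.
    """
    channel_estimate = cepstra.mean(axis=0)   # biased toward formants of voiced frames
    return cepstra - channel_estimate         # channel-normalized cepstra

# Hypothetical usage: 300 frames of 13 cepstral coefficients.
if __name__ == "__main__":
    frames = np.random.randn(300, 13)
    normalized = cepstral_mean_subtraction(frames)
    print(normalized.mean(axis=0))            # ~0 for every coefficient
```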

A study on extraction of the frames representing each phoneme in continuous speech (연속음에서의 각 음소의 대표구간 추출에 관한 연구)

  • 박찬응;이쾌희
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.33B no.4
    • /
    • pp.174-182
    • /
    • 1996
  • In a continuous speech recognition system, it is possible to handle an unlimited number of words by using a limited number of phonetic units such as phonemes. Dividing continuous speech into a string of phoneme units prior to the recognition process can lower the complexity of the system, but because of coarticulation between neighboring phonemes it is very difficult to extract their boundaries exactly. In this paper, we propose an algorithm that extracts short intervals representing each phoneme instead of extracting phoneme boundaries. Intervals of lower and higher spectral change are detected; phoneme changes are then detected by applying a distance measure to the low-spectral-change intervals, while high-spectral-change intervals are regarded as transitions or short phonemes. Finally, the low-spectral-change intervals and the midpoints of the high-spectral-change intervals are taken to represent each phoneme. Cepstral coefficients are used as the speech feature and the weighted cepstral distance as the distance measure because of their low computational complexity (a sketch of this distance follows this entry); the speech data used in the experiment were recorded in quiet and ordinary indoor environments. The experimental results show that the proposed algorithm achieves higher performance with less computation than conventional segmentation algorithms and can be usefully applied to phoneme-based continuous speech recognition.

  • PDF
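
The spectral-change measure referenced in the entry above is a weighted cepstral distance between neighboring frames. The sketch below assumes a simple index weighting w_k = k, one common choice; the paper's exact weights are not specified here.

```python
import numpy as np

def weighted_cepstral_distance(c1, c2, weights=None):
    """Weighted squared Euclidean distance between two cepstral vectors.

    If no weights are given, use index weighting w_k = k, a common choice
    that emphasizes higher-order cepstral coefficients.
    """
    if weights is None:
        weights = np.arange(1, len(c1) + 1, dtype=float)
    diff = np.asarray(c1) - np.asarray(c2)
    return float(np.sum(weights * diff ** 2))

def frame_spectral_change(cepstra):
    """Distance between successive frames: low values suggest stable
    (phoneme-interior) intervals, high values suggest transitions."""
    return np.array([weighted_cepstral_distance(cepstra[i], cepstra[i + 1])
                     for i in range(len(cepstra) - 1)])
```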

Digital Audio Watermarking in The Cepstrum Domain (켑스트럼 영역에서의 오디오 워터마킹 방법)

  • 이상광;호요성
    • Journal of Broadcast Engineering
    • /
    • v.6 no.1
    • /
    • pp.13-20
    • /
    • 2001
  • In this paper, we propose a new digital audio watermarking scheme in the cepstrum domain. We insert a digital watermark into the cepstral components of the audio signal using a technique analogous to spread-spectrum communications, hiding a narrow-band signal in a wide-band channel. In the proposed method, pseudo-random sequences are used to watermark the audio signal. The watermark is then weighted in the cepstrum domain according to the distribution of the cepstral coefficients and the frequency-masking characteristics of the human auditory system. The proposed embedding scheme minimizes the audibility of the watermark signal, and the embedded watermark is robust to multiple watermarks, MPEG audio coding, and additive noise. (A sketch of the embedding step follows this entry.)

  • PDF
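
The embedding step can be sketched as follows: take the real cepstrum of a frame (inverse FFT of the log magnitude spectrum), add a scaled pseudo-random sequence to a band of cepstral bins, and resynthesize with the original phase. The strength `alpha` and the choice of bins are illustrative assumptions, not the paper's tuned or perceptually weighted values.

```python
import numpy as np

def embed_watermark_frame(frame, prn, alpha=0.001, start_bin=20):
    """Embed a pseudo-random watermark into one frame's real cepstrum.

    frame : 1-D audio frame (length N)
    prn   : +/-1 pseudo-random sequence (length <= N - start_bin)
    alpha : embedding strength (illustrative value)
    """
    spectrum = np.fft.fft(frame)
    log_mag = np.log(np.abs(spectrum) + 1e-12)
    phase = np.angle(spectrum)

    cepstrum = np.fft.ifft(log_mag).real              # real cepstrum
    cepstrum[start_bin:start_bin + len(prn)] += alpha * prn

    watermarked_log_mag = np.fft.fft(cepstrum).real   # back to log spectrum
    watermarked_spectrum = np.exp(watermarked_log_mag) * np.exp(1j * phase)
    return np.fft.ifft(watermarked_spectrum).real     # resynthesized frame
```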

Estimation of Optimal Mixture Number of GMM for Environmental Sounds Recognition (환경음 인식을 위한 GMM의 혼합모델 개수 추정)

  • Han, Da-Jeong;Park, Aa-Ron;Baek, Sung-June
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.13 no.2
    • /
    • pp.817-821
    • /
    • 2012
  • In this paper, we apply optimal mixture number estimation to the GMM (Gaussian mixture model) using BIC (Bayesian information criterion) and MDL (minimum description length) as model selection criteria for environmental sound recognition. In the experiment, we extracted 12 MFCC (mel-frequency cepstral coefficient) features from 9 kinds of environmental sounds, amounting to 27,747 data samples, and classified them with GMMs. BIC and MDL were applied to estimate the optimal number of mixtures for each environmental sound class. According to the experimental results, recognition performance is maintained while the computational complexity decreases by 17.8% with BIC and 31.7% with MDL, showing that complexity reduction by BIC and MDL is effective for environmental sound recognition using GMMs.
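
A minimal sketch of the BIC-based selection described above, using librosa for 12-dimensional MFCCs and scikit-learn's GaussianMixture (both assumed to be available); MDL-based selection is analogous, since the two criteria are closely related in their usual formulations.

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_features(path, n_mfcc=12):
    """12-dimensional MFCC frames from an audio file (librosa assumed)."""
    y, sr = librosa.load(path, sr=None)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T   # (frames, 12)

def select_mixture_number(X, candidates=range(1, 21)):
    """Fit one GMM per candidate mixture count and keep the lowest BIC."""
    best_k, best_bic, best_gmm = None, np.inf, None
    for k in candidates:
        gmm = GaussianMixture(n_components=k, covariance_type="diag",
                              max_iter=200, random_state=0).fit(X)
        bic = gmm.bic(X)
        if bic < best_bic:
            best_k, best_bic, best_gmm = k, bic, gmm
    return best_k, best_gmm
```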

Improvement of Environmental Sounds Recognition by Post Processing (후처리를 이용한 환경음 인식 성능 개선)

  • Park, Jun-Qyu;Baek, Seong-Joon
    • The Journal of the Korea Contents Association
    • /
    • v.10 no.7
    • /
    • pp.31-39
    • /
    • 2010
  • In this study, we prepared real environmental sound data sets arising from people's movement, comprising 9 different environment types. The environmental sounds are pre-processed with pre-emphasis and a Hamming window, and classification experiments are then carried out on features extracted with MFCC (Mel-Frequency Cepstral Coefficients). A GMM (Gaussian Mixture Model) classifier without post-processing tends to yield abruptly changing classification results, since it does not consider the results of neighboring frames. Hence we propose post-processing methods that suppress abruptly changing results by taking the probability or the rank of the neighboring frames into account. According to the experimental results, the method using the probabilities of neighboring frames improves recognition performance by more than 10% compared with no post-processing.
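
The probability-based post-processing described above amounts to smoothing per-frame class scores over a window of neighboring frames before taking the arg-max. A sketch under that reading, assuming a (frames × classes) matrix of per-frame GMM log-likelihoods and an illustrative window size:

```python
import numpy as np

def smooth_and_classify(frame_loglik, window=5):
    """Average each class's log-likelihood over a window of neighboring
    frames, then pick the arg-max class per frame.

    frame_loglik : (num_frames, num_classes) per-frame GMM log-likelihoods
    window       : odd number of frames to average over (illustrative)
    """
    kernel = np.ones(window) / window
    smoothed = np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, frame_loglik)
    return smoothed.argmax(axis=1)    # smoothed class index per frame
```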

Gaussian Mixture Model using Minimum Classification Error for Environmental Sounds Recognition Performance Improvement (Minimum Classification Error 방법 도입을 통한 Gaussian Mixture Model 환경음 인식성능 향상)

  • Han, Da-Jeong;Park, Aa-Ron;Park, Jun-Qyu;Baek, Sung-June
    • The Journal of the Korea Contents Association
    • /
    • v.11 no.12
    • /
    • pp.497-503
    • /
    • 2011
  • In this paper, we propose MCE (minimum classification error) training as a GMM training method to improve the performance of environmental sound recognition. For discriminative training, we model the environmental sound data with a newly defined misclassification function that uses the log-likelihood of the corresponding class and the log-likelihoods of the remaining classes. The model parameters are estimated through the loss function using GPD (generalized probabilistic descent). For a performance comparison, we extracted 12-dimensional MFCC (mel-frequency cepstral coefficient) features from 9 kinds of environmental sounds after preprocessing and carried out GMM classification experiments. According to the results, MCE training showed the best performance, an average of 87.06% with 19 mixtures. This confirms that MCE training can be used effectively as a GMM training method for environmental sound recognition.
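
The core of MCE/GPD training is the misclassification measure and its smooth sigmoid loss. The sketch below follows the standard Juang–Katagiri formulation and omits the GMM parameter update itself; `eta` and `gamma` are illustrative smoothing constants, not values taken from the paper.

```python
import numpy as np

def misclassification_measure(logliks, true_class, eta=1.0):
    """d_k(x) = -g_k(x) + (1/eta) * log( mean_{j != k} exp(eta * g_j(x)) ),
    where g_j is the log-likelihood of class j (standard MCE form)."""
    g_true = logliks[true_class]
    others = np.delete(np.asarray(logliks, dtype=float), true_class)
    anti_discriminant = np.log(np.mean(np.exp(eta * others))) / eta
    return -g_true + anti_discriminant

def mce_loss(d, gamma=1.0):
    """Smooth 0-1 loss: sigmoid of the misclassification measure."""
    return 1.0 / (1.0 + np.exp(-gamma * d))
```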

Dialect classification based on the speed and the pause of speech utterances (발화 속도와 휴지 구간 길이를 사용한 방언 분류)

  • Jonghwan Na;Bowon Lee
    • Phonetics and Speech Sciences
    • /
    • v.15 no.2
    • /
    • pp.43-51
    • /
    • 2023
  • In this paper, we propose an approach to dialect classification based on the speed and pauses of speech utterances as well as the age and gender of the speakers. Dialect classification is one of the important techniques for speech analysis; for example, an accurate dialect classification model can potentially improve the performance of speaker or speech recognition. According to previous studies, deep-learning research based on Mel-Frequency Cepstral Coefficient (MFCC) features has been the dominant approach. We focus on the acoustic differences between regions and conduct dialect classification based on features derived from those differences. Specifically, we extract underexplored additional features, namely the speed and the pauses of speech utterances, along with metadata including the age and gender of the speakers. Experimental results show that the proposed approach yields higher accuracy than using MFCC features alone, especially with the speech-rate feature: incorporating all the proposed features improved accuracy from 91.02% to 97.02%.
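
The two prosodic features proposed above can be approximated from a waveform with an energy-based silence split. The sketch below uses librosa.effects.split (assumed available) and takes the syllable count as a given input, since reliable syllable counting is itself non-trivial; the silence threshold is an illustrative choice.

```python
import numpy as np
import librosa

def speed_and_pause_features(path, n_syllables, top_db=30):
    """Rough speech-rate and pause features for one utterance.

    n_syllables : syllable count of the transcript (assumed known here)
    top_db      : silence threshold for librosa.effects.split (illustrative)
    """
    y, sr = librosa.load(path, sr=None)
    total_dur = len(y) / sr
    intervals = librosa.effects.split(y, top_db=top_db)    # non-silent spans
    speech_dur = sum((end - start) for start, end in intervals) / sr
    pause_dur = total_dur - speech_dur
    return np.array([
        n_syllables / max(speech_dur, 1e-6),   # speech rate (syllables/s)
        pause_dur,                             # total pause length (s)
        pause_dur / max(total_dur, 1e-6),      # pause ratio
    ])
```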

Voice-Based Gender Identification Employing Support Vector Machines (음성신호 기반의 성별인식을 위한 Support Vector Machines의 적용)

  • Lee, Kye-Hwan;Kang, Sang-Ick;Kim, Deok-Hwan;Chang, Joon-Hyuk
    • The Journal of the Acoustical Society of Korea
    • /
    • v.26 no.2
    • /
    • pp.75-79
    • /
    • 2007
  • We propose an effective voice-based gender identification method using a support vector machine (SVM). The SVM is a binary classification algorithm that separates two groups by finding a nonlinear decision boundary in a feature space and is known to yield high classification performance. In the present work, we compare the identification performance of the SVM with that of a Gaussian mixture model (GMM) using mel-frequency cepstral coefficients (MFCC). A feature fusion scheme based on a combination of MFCC and pitch is proposed with the aim of improving the performance of gender identification using the SVM. Experimental results indicate that gender identification with the SVM is significantly better than with the GMM, and the performance is substantially improved when the proposed feature fusion technique is applied.
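
A sketch of the feature-fusion idea: summarize MFCCs and pitch per utterance, concatenate them, and train an SVM. librosa and scikit-learn are assumed; the pitch range, the MFCC statistics, and the RBF kernel are illustrative choices rather than the paper's settings.

```python
import numpy as np
import librosa
from sklearn.svm import SVC

def utterance_features(path):
    """Concatenate MFCC statistics with pitch statistics for one utterance."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)        # frame-level pitch
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1),
                           [np.mean(f0), np.std(f0)]])

def train_gender_svm(paths, labels):
    """Hypothetical training loop: `paths` are audio files, `labels` 0/1."""
    X = np.stack([utterance_features(p) for p in paths])
    return SVC(kernel="rbf", C=1.0).fit(X, labels)
```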

Korean Digit Recognition Using Cepstrum coefficients and Frequency Sensitive Competitive Learning (Cepstrum 계수와 Frequency Sensitive Competitive Learning 신경회로망을 이용한 한국어 인식.)

  • Lee, Su-Hyuk;Cho, Seong-Won;Choi, Gyung-Sam
    • Proceedings of the KIEE Conference
    • /
    • 1994.11a
    • /
    • pp.329-331
    • /
    • 1994
  • In this paper, we present a speaker-dependent Korean isolated-digit recognition system. At the preprocessing step, LPC cepstral coefficients are extracted from the speech signal and used as the input to a Frequency Sensitive Competitive Learning (FSCL) neural network (a sketch of the LPC-to-cepstrum recursion follows this entry). Postprocessing is carried out based on the winning-neuron histogram. Experimental results indicate the feasibility of application to commercial auto-dial telephones.

  • PDF
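
The LPC cepstral coefficients used as FSCL input can be computed from the predictor coefficients with the standard recursion. The sketch below assumes the predictor convention in which a[k] are the coefficients of the predictor sum_k a[k]*s[n-k]; sign conventions differ between toolkits, so the signs may need flipping for other conventions.

```python
import numpy as np

def lpc_to_cepstrum(a, n_ceps):
    """LPC cepstral coefficients c[1..n_ceps] from predictor coefficients a[1..p].

    Standard recursion (gain term c[0] omitted):
        c[n] = a[n] + sum_{k=1}^{n-1} (k/n) * c[k] * a[n-k],
    with a[n] taken as 0 for n > p.
    """
    a = np.asarray(a, dtype=float)
    p = len(a)
    a_ext = np.concatenate([a, np.zeros(max(0, n_ceps - p))])  # zero-pad a_n for n > p
    c = np.zeros(n_ceps + 1)                                   # index 0 unused
    for n in range(1, n_ceps + 1):
        acc = a_ext[n - 1]                                     # a_n
        for k in range(1, n):
            acc += (k / n) * c[k] * a_ext[n - k - 1]           # a_{n-k}
        c[n] = acc
    return c[1:]
```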

Driver Verification System Using Biometrical GMM Supervector Kernel (생체기반 GMM Supervector Kernel을 이용한 운전자검증 기술)

  • Kim, Hyoung-Gook
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.9 no.3
    • /
    • pp.67-72
    • /
    • 2010
  • This paper presents a biometric driver verification system evaluated in an in-car experiment through the analysis of speech and face information. We use Mel-scale Frequency Cepstral Coefficients (MFCCs) for speaker verification from the speech information. For face verification, the face region is detected by the AdaBoost algorithm, and a dimension-reduced feature vector is extracted from the face region using principal component analysis. We then feed the extracted speech and face feature vectors to an SVM with a Gaussian Mixture Model (GMM) supervector kernel. The experimental results of the proposed approach show a clear improvement over a simple GMM or SVM approach.
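
The GMM-supervector construction can be sketched with scikit-learn: MAP-adapt only the means of a universal background model (UBM) toward one utterance's frames, then stack the adapted means into a single fixed-length vector for the SVM. The relevance factor `r` and diagonal covariances are common choices assumed here, not values taken from the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_supervector(ubm, X, r=16.0):
    """Mean-only MAP adaptation of a UBM followed by mean stacking.

    ubm : GaussianMixture already fitted on background data
    X   : (num_frames, dim) feature frames of one driver/utterance
    r   : relevance factor controlling adaptation strength
    """
    gamma = ubm.predict_proba(X)              # (frames, mixtures) responsibilities
    n = gamma.sum(axis=0) + 1e-10             # soft frame counts per mixture
    x_bar = gamma.T @ X / n[:, None]          # per-mixture data means
    alpha = (n / (n + r))[:, None]            # adaptation coefficients
    adapted_means = alpha * x_bar + (1 - alpha) * ubm.means_
    return adapted_means.ravel()              # supervector fed to the SVM
```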