• Title/Summary/Keyword: MFCC


Channel-attentive MFCC for Improved Recognition of Partially Corrupted Speech (부분 손상된 음성의 인식 향상을 위한 채널집중 MFCC 기법)

  • 조훈영;지상문;오영환
    • The Journal of the Acoustical Society of Korea / v.22 no.4 / pp.315-322 / 2003
  • We propose a channel-attentive Mel-frequency cepstral coefficient (CAMFCC) extraction method to improve the recognition of speech that is partially corrupted in the frequency domain. The method introduces weighting terms both at the filter bank analysis step and at the output probability calculation of the decoding step. A weight is obtained for each frequency channel of the filter bank such that more reliable channels are emphasized by higher weight values. Experimental results on the TIDIGITS database corrupted by various frequency-selective noises indicate that the proposed CAMFCC method makes good use of the uncorrupted speech information, improving recognition performance by 11.2% on average over a multi-band speech recognition system.
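
The channel-weighting idea can be sketched as follows. This is a minimal illustration, not the paper's method: the reliability estimator in `channel_weights` (with its assumed noise floor) is hypothetical, and only the filter-bank-side weighting is shown; the decoding-side weighting is omitted.

```python
import numpy as np

def channel_weights(fbank_energies, noise_floor=1e-3):
    # Hypothetical reliability estimate: channels whose average energy sits
    # well above an assumed noise floor get larger weights (the paper derives
    # its weights differently; this only illustrates the idea).
    snr = fbank_energies.mean(axis=0) / noise_floor
    w = np.log1p(snr)
    return w / w.sum() * len(w)  # normalized so the weights average to 1

def channel_attentive_mfcc(fbank_energies, num_ceps=13):
    # Emphasize reliable channels before the DCT that produces the cepstrum.
    w = channel_weights(fbank_energies)
    log_e = np.log(np.maximum(fbank_energies, 1e-10)) * w
    n = log_e.shape[1]
    k = np.arange(num_ceps)[:, None]
    basis = np.cos(np.pi * k * (2 * np.arange(n) + 1) / (2 * n))  # DCT-II
    return log_e @ basis.T

rng = np.random.default_rng(0)
energies = rng.uniform(0.01, 1.0, size=(50, 23))  # 50 frames, 23 mel channels
ceps = channel_attentive_mfcc(energies)
```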

Dimensionality Reduction in Speech Recognition by Principal Component Analysis (음성인식에서 주 성분 분석에 의한 차원 저감)

  • Lee, Chang-Young
    • The Journal of the Korea institute of electronic communication sciences / v.8 no.9 / pp.1299-1305 / 2013
  • In this paper, we investigate a method of reducing the computational cost of speech recognition by dimensionality reduction of MFCC feature vectors. Eigendecomposition of the feature vectors yields a linear transformation that orders the vector components by variance. The first component has the largest variance and hence serves as the most important one for the relevant pattern classification. We can therefore reduce the computational cost without degrading recognition performance by excluding the least-variance components. Experimental results show that the number of MFCC components can be reduced by about half without a significant adverse effect on the recognition error rate.
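
A minimal sketch of this PCA reduction, using an eigendecomposition of the MFCC covariance matrix; the 39-dimensional feature size and the choice of 20 retained components are illustrative assumptions, not values from the paper.

```python
import numpy as np

def pca_reduce(X, k):
    # Eigendecomposition of the feature covariance orders components by
    # variance; keeping the top k discards the least-variance directions.
    Xc = X - X.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(vals)[::-1]  # largest variance first
    return Xc @ vecs[:, order[:k]], vals[order]

rng = np.random.default_rng(1)
mfcc = rng.normal(size=(200, 39))          # e.g. 39-dim MFCC + deltas
reduced, variances = pca_reduce(mfcc, 20)  # keep roughly half the dimensions
```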

A New Power Spectrum Warping Approach to Speaker Warping (화자 정규화를 위한 새로운 파워 스펙트럼 Warping 방법)

  • 유일수;김동주;노용완;홍광석
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.4 / pp.103-111 / 2004
  • Speaker normalization is known to be a successful method for improving accuracy in speaker-independent speech recognition systems. Frequency warping based on maximum likelihood is a widely used approach to speaker normalization. This paper proposes a new power spectrum warping approach that improves speaker normalization beyond frequency warping. Power spectrum warping uses Mel-frequency cepstral analysis (MFCC) and provides a simple mechanism for speaker normalization by modifying the power spectrum of the Mel filter bank in MFCC. The paper also proposes a hybrid VTN that combines power spectrum warping with frequency warping. Experiments compared the recognition performance of each speaker normalization approach applied to a baseline system on the SKKU PBW DB. The results show word error rate reductions over the baseline system of 2.06% for frequency warping, 3.06% for power spectrum warping, and 4.07% for hybrid VTN.
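
The warping operation can be illustrated as below. This is a simplified stand-in: it linearly warps a power spectrum by a factor `alpha` via interpolation, whereas the paper warps the power spectrum inside the Mel filter bank and estimates the warping factor by maximum likelihood.

```python
import numpy as np

def warp_spectrum(power_spec, alpha):
    # Resample each frame's power spectrum on a linearly warped frequency
    # axis f' = alpha * f -- a toy version of vocal tract normalization.
    n = power_spec.shape[-1]
    bins = np.arange(n, dtype=float)
    return np.stack([np.interp(np.clip(bins * alpha, 0, n - 1), bins, frame)
                     for frame in power_spec])

rng = np.random.default_rng(2)
spec = np.abs(np.fft.rfft(rng.normal(size=(10, 256)), axis=1)) ** 2
warped = warp_spectrum(spec, 0.9)  # compress the frequency axis by 10%
```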

A Study on Error Correction Using Phoneme Similarity in Post-Processing of Speech Recognition (음성인식 후처리에서 음소 유사율을 이용한 오류보정에 관한 연구)

  • Han, Dong-Jo;Choi, Ki-Ho
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.6 no.3 / pp.77-86 / 2007
  • Recently, systems based on speech recognition interfaces, such as telematics terminals, are being developed. However, many errors still occur in speech recognition, and studies on error correction are being actively conducted. This paper proposes an error correction method for the post-processing stage of speech recognition based on the features of Korean phonemes. To support this algorithm, we used a phoneme similarity measure that takes the features of Korean phonemes into account. The phoneme similarity used in this paper is trained on mono-phoneme data, uses MFCC and LPC to extract features for each Korean phoneme, and uses the Bhattacharyya distance measure to obtain the similarity between one phoneme and another. Using the phoneme similarity, errors in an eo-jeol that cannot be morphologically analyzed can be corrected, after which syllable recovery and morphological analysis are performed again. Experimental results show improvements of 7.5% and 5.3% for MFCC and LPC, respectively.
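
The Bhattacharyya distance step can be illustrated for diagonal-covariance Gaussian phoneme models. The mapping from distance to a similarity score via `exp(-D)` is an assumption for illustration; the paper's exact similarity definition may differ.

```python
import numpy as np

def bhattacharyya(mu1, var1, mu2, var2):
    # Bhattacharyya distance between two diagonal-covariance Gaussian
    # phoneme models (means/variances estimated from MFCC or LPC frames).
    var = (var1 + var2) / 2.0
    term1 = 0.125 * np.sum((mu1 - mu2) ** 2 / var)
    term2 = 0.5 * np.sum(np.log(var / np.sqrt(var1 * var2)))
    return term1 + term2

def phoneme_similarity(mu1, var1, mu2, var2):
    # Map the distance to a (0, 1] similarity; identical models give 1.0.
    return np.exp(-bhattacharyya(mu1, var1, mu2, var2))

mu_a, var_a = np.zeros(13), np.ones(13)
mu_b, var_b = np.full(13, 0.5), np.ones(13)
sim_self = phoneme_similarity(mu_a, var_a, mu_a, var_a)
sim_ab = phoneme_similarity(mu_a, var_a, mu_b, var_b)
```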


Analysis of Feature Extraction Methods for Distinguishing the Speech of Cleft Palate Patients (구개열 환자 발음 판별을 위한 특징 추출 방법 분석)

  • Kim, Sung Min;Kim, Wooil;Kwon, Tack-Kyun;Sung, Myung-Whun;Sung, Mee Young
    • Journal of KIISE / v.42 no.11 / pp.1372-1379 / 2015
  • This paper presents an analysis of feature extraction methods used for distinguishing the speech of patients with cleft palates from that of people with normal palates. This research is a basic study toward a software system for automatic recognition and restoration of speech disorders, in pursuit of improving the welfare of speech-disabled persons. Monosyllable voice data were collected for three groups: normal speech, cleft palate speech, and simulated cleft palate speech. The data consist of 14 basic Korean consonants, 5 complex consonants, and 7 vowels. Feature extraction is performed using three well-known methods: LPC, MFCC, and PLP. Pattern recognition is carried out with a GMM acoustic model. From our experiments, we conclude that MFCC is generally the most effective method for identifying speech distortions. These results may contribute to automatic detection and correction of the distorted speech of cleft palate patients, along with the development of a tool for identifying levels of speech distortion.
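
The GMM classification stage might be sketched as below, with a single diagonal Gaussian per class standing in for a full GMM; class labels and feature data are synthetic, and a real GMM would mix several such components per class.

```python
import numpy as np

class DiagGaussianModel:
    # One diagonal-covariance Gaussian per class -- a minimal stand-in for
    # the paper's GMM acoustic model.
    def fit(self, X):
        self.mu, self.var = X.mean(axis=0), X.var(axis=0) + 1e-6
        return self

    def log_likelihood(self, X):
        return -0.5 * np.sum(np.log(2 * np.pi * self.var)
                             + (X - self.mu) ** 2 / self.var, axis=1)

def classify(frames, models):
    # Sum per-frame log-likelihoods under each class model; pick the best.
    scores = {name: m.log_likelihood(frames).sum() for name, m in models.items()}
    return max(scores, key=scores.get)

rng = np.random.default_rng(2)
normal_speech = rng.normal(0.0, 1.0, size=(300, 13))  # synthetic features
cleft_speech = rng.normal(2.0, 1.0, size=(300, 13))
models = {"normal": DiagGaussianModel().fit(normal_speech),
          "cleft": DiagGaussianModel().fit(cleft_speech)}
pred = classify(rng.normal(2.0, 1.0, size=(40, 13)), models)
```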

Vocabulary Recognition Post-Processing System using Phoneme Similarity Error Correction (음소 유사율 오류 보정을 이용한 어휘 인식 후처리 시스템)

  • Ahn, Chan-Shik;Oh, Sang-Yeob
    • Journal of the Korea Society of Computer and Information / v.15 no.7 / pp.83-90 / 2010
  • Vocabulary recognition systems suffer reduced recognition rates from unrecognized-word errors caused by confusion among similar phonemes and by inaccurate vocabulary being provided. When an inaccurate vocabulary is supplied, feature extraction yields unrecognized words or recognition of a similar phoneme instead of the correct one, and features cannot be extracted properly when similar phonemes are confused. This paper proposes a post-processing error correction system for vocabulary recognition that uses phoneme likelihoods based on phoneme features. The phoneme likelihoods are obtained from monophone-trained phoneme data using MFCC and LPC feature extraction. Similar phonemes are guided toward recognition of the correct phoneme, reducing the error rate caused by inaccurate vocabulary. Error correction is performed on erroneous vocabulary using the phoneme likelihood and a confidence measure. In a performance comparison, the proposed system improved recognition by 7.5% with MFCC and 5.3% with LPC over systems using error patterns and semantics.
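
The correction decision can be illustrated with a toy similarity table; the romanized phoneme labels, similarity scores, and confidence threshold here are all hypothetical.

```python
# Hypothetical similarity scores between confusable phoneme pairs
# (the paper would derive these from phoneme-model distances).
similarity = {("g", "k"): 0.82, ("d", "t"): 0.79, ("b", "p"): 0.77}

def correct_phoneme(phoneme, confidence, threshold=0.6):
    # Keep a confidently recognized phoneme; otherwise substitute the
    # alternative with the highest similarity score, if one is known.
    if confidence >= threshold:
        return phoneme
    candidates = {b: s for (a, b), s in similarity.items() if a == phoneme}
    return max(candidates, key=candidates.get) if candidates else phoneme
```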

Audio Genre Classification based on Deep Learning using Spectrogram (스펙트로그램을 이용한 딥 러닝 기반의 오디오 장르 분류 기술)

  • Jang, Woo-Jin;Yun, Ho-Won;Shin, Seong-Hyeon;Park, Ho-chong
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2016.06a / pp.90-91 / 2016
  • This paper proposes a deep-learning-based audio genre classification technique using spectrograms. Conventional audio genre classification mostly uses the GMM algorithm and, owing to GMM's characteristics, uses MFCC, whose components are mutually orthogonal, as the audio feature. Deep learning, however, places no restriction on the nature of its input, so rawer features than MFCC can be used; these express the characteristics of the audio more clearly and thus enable more effective learning. To obtain features effective for deep learning, this paper proposes a method of extracting audio features using spectrograms. The proposed method achieves a higher recognition rate than deep learning with MFCC features.
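
A log-magnitude spectrogram of the kind fed to such a network can be computed directly with a short-time Fourier transform; this sketch uses a Hann window and illustrative frame sizes.

```python
import numpy as np

def log_spectrogram(signal, frame_len=256, hop=128):
    # Hann-windowed STFT magnitude in dB: the raw time-frequency
    # representation used instead of MFCC.
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames, axis=1))
    return 20 * np.log10(mag + 1e-10)

t = np.arange(16000) / 16000.0
audio = np.sin(2 * np.pi * 440 * t)  # 1 s of a 440 Hz tone at 16 kHz
spec = log_spectrogram(audio)        # frames x frequency bins
```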


Sound Reinforcement Based on Context Awareness for Hearing Impaired (청각장애인을 위한 상황인지기반의 음향강화기술)

  • Choi, Jae-Hun;Chang, Joon-Hyuk
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.5 / pp.109-114 / 2011
  • In this paper, we apply context awareness based on a Gaussian mixture model (GMM) to sound reinforcement for the hearing impaired. In our approach, harmful sounds are amplified by the sound reinforcement algorithm according to context awareness based on a GMM built from Mel-frequency cepstral coefficient (MFCC) feature vectors extracted from sound data. Experimental results show the proposed approach to be effective in various acoustic environments.
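
The context-aware gain decision might be sketched as below, with a nearest-centroid classifier standing in for the GMM; the context labels ("alarm" for a harmful sound that should be amplified) and centroid values are illustrative.

```python
import numpy as np

# Hypothetical context centroids in MFCC space (labels are illustrative).
centroids = {"alarm": np.full(13, 3.0), "speech": np.zeros(13)}

def detect_context(mfcc_frames):
    # Nearest-centroid stand-in for the paper's GMM context classifier.
    mean_vec = mfcc_frames.mean(axis=0)
    return min(centroids, key=lambda c: np.linalg.norm(mean_vec - centroids[c]))

def reinforce(audio, mfcc_frames, gain=4.0):
    # Amplify the signal only when the detected context is a harmful sound.
    return audio * gain if detect_context(mfcc_frames) == "alarm" else audio

rng = np.random.default_rng(3)
alarm_frames = rng.normal(3.0, 0.5, size=(20, 13))
out = reinforce(np.ones(100), alarm_frames)
```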

The Comparison of Speech Feature Parameters for Emotion Recognition (감정 인식을 위한 음성의 특징 파라메터 비교)

  • 김원구
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2004.04a / pp.470-473 / 2004
  • In this paper, speech feature parameters for emotion recognition from the speech signal are compared. For this purpose, a corpus of emotional speech data, recorded and classified by emotion through subjective evaluation, was used to build statistical feature vectors such as the average, standard deviation, and maximum of pitch and energy. MFCC parameters and their derivatives, with or without cepstral mean subtraction, were also used to evaluate the performance of conventional pattern matching algorithms. Pitch and energy parameters served as prosodic information and MFCC parameters as phonetic information. A vector quantization based emotion recognition system was used for speaker- and context-independent emotion recognition. Experimental results showed that the vector quantization based emotion recognizer using MFCC parameters outperformed the one using pitch and energy parameters, achieving a recognition rate of 73.3% for speaker- and context-independent classification.
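
The vector quantization approach can be sketched with a per-emotion k-means codebook: an utterance is assigned the emotion whose codebook quantizes its feature frames with the least distortion. The emotion labels, codebook size, and data here are synthetic.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    # Plain k-means codebook training on feature frames.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def distortion(X, codebook):
    # Average distance from each frame to its nearest codeword.
    return np.min(((X[:, None] - codebook) ** 2).sum(-1), axis=1).mean()

rng = np.random.default_rng(4)
train = {"neutral": rng.normal(0, 1, (200, 13)),
         "angry": rng.normal(3, 1, (200, 13))}
books = {emo: kmeans(X, 8, seed=i) for i, (emo, X) in enumerate(train.items())}
test_utt = rng.normal(3, 1, (50, 13))
pred = min(books, key=lambda emo: distortion(test_utt, books[emo]))
```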


Improvement of Speech Reconstructed from MFCC Using GMM (GMM을 이용한 MFCC로부터 복원된 음성의 개선)

  • Choi, Won-Young;Choi, Mu-Yeol;Kim, Hyung-Soon
    • MALSORI / no.53 / pp.129-141 / 2005
  • The goal of this research is to improve the quality of reconstructed speech in the Distributed Speech Recognition (DSR) system. For the extended DSR, we estimate the variable Maximum Voiced Frequency (MVF) from Mel-Frequency Cepstral Coefficient (MFCC) based on Gaussian Mixture Model (GMM), to implement realistic harmonic plus noise model for the excitation signal. For the standard DSR, we also make the voiced/unvoiced decision from MFCC based on GMM because the pitch information is not available in that case. The perceptual test reveals that speech reconstructed by the proposed method is preferred to the one by the conventional methods.
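
The spectral-envelope side of reconstruction from MFCC can be illustrated by inverting the truncated DCT back to approximate log mel energies; channel counts here are illustrative, and the paper's GMM-based MVF and voiced/unvoiced estimation, which handle the excitation signal, are not shown.

```python
import numpy as np

def log_mel_to_mfcc(log_mel, num_ceps=13):
    # Forward DCT-II, as in standard MFCC extraction.
    n = log_mel.shape[-1]
    basis = np.cos(np.pi * np.arange(num_ceps)[:, None]
                   * (2 * np.arange(n) + 1) / (2 * n))
    return log_mel @ basis.T

def mfcc_to_log_mel(ceps, num_channels=23):
    # Inverse (possibly truncated) DCT-II: approximate log mel energies
    # from MFCC, the first step toward a reconstructed spectral envelope.
    n, k = num_channels, ceps.shape[-1]
    basis = np.cos(np.pi * np.arange(k)[:, None]
                   * (2 * np.arange(n) + 1) / (2 * n))
    recon = (2.0 / n) * ceps @ basis
    recon -= (1.0 / n) * ceps[..., :1]  # the k=0 term enters at half weight
    return recon

rng = np.random.default_rng(5)
log_mel = rng.normal(size=(5, 23))
full_ceps = log_mel_to_mfcc(log_mel, num_ceps=23)    # keep all coefficients
recon = mfcc_to_log_mel(full_ceps, num_channels=23)  # exact round trip
```

With the usual 13 retained coefficients the inversion yields a smoothed envelope rather than an exact round trip.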
