• Title/Summary/Keyword: MFCCs

Extraction of MFCC feature parameters based on the PCA-optimized filter bank and Korean connected 4-digit telephone speech recognition (PCA-optimized 필터뱅크 기반의 MFCC 특징파라미터 추출 및 한국어 4연숫자 전화음성에 대한 인식실험)

  • 정성윤;김민성;손종목;배건성
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.6
    • /
    • pp.279-283
    • /
    • 2004
  • In general, triangular filters are used in the filter bank when extracting MFCC feature parameters from the spectrum of a speech signal. A different approach, proposed by Lee et al. to improve the recognition rate, uses filter shapes optimized to the spectrum of the training speech data; a principal component analysis (PCA) method is used to obtain the optimized filter coefficients. In this paper, using a large 4-digit telephone speech database, we extract MFCCs based on the PCA-optimized filter bank and compare the recognition performance with conventional MFCCs and with MFCCs based on a directly weighted filter bank. Experimental results show that MFCCs based on the PCA-optimized filter bank give a slight improvement in recognition rate over conventional MFCCs, but fail to outperform MFCCs based on the directly weighted filter bank analysis. The results are discussed together with our findings.
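The abstract leaves the construction details open; a minimal Python sketch of the idea, replacing each triangular mel filter with a shape derived from the first principal component of that band's training spectra, might look like this (the function names, the non-negativity constraint, and the normalization are assumptions, not the authors' code):

```python
# Sketch: PCA-derived filter shapes per mel band, then the usual MFCC pipeline.
import numpy as np
from scipy.fft import dct

def pca_band_filter(band_spectra):
    """First principal component of one band's training power spectra,
    used as that band's optimized filter shape."""
    X = band_spectra - band_spectra.mean(axis=0)       # (n_frames, bins_in_band)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    w = np.abs(vt[0])                                  # assumed: non-negative weights
    return w / w.sum()                                 # assumed: unit-sum normalization

def mfcc_from_filters(power_spec, band_slices, filters, n_ceps=13):
    """Standard MFCC computation: filter-bank energies -> log -> DCT."""
    energies = np.array([power_spec[sl] @ w for sl, w in zip(band_slices, filters)])
    return dct(np.log(energies + 1e-10), type=2, norm='ortho')[:n_ceps]

# Hypothetical usage: band_slices holds each mel band's FFT-bin range and
# train_spectra is an (n_frames, n_fft_bins) array of training power spectra.
# filters = [pca_band_filter(train_spectra[:, sl]) for sl in band_slices]
```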

Statistical Speech Feature Selection for Emotion Recognition

  • Kwon Oh-Wook;Chan Kwokleung;Lee Te-Won
    • The Journal of the Acoustical Society of Korea
    • /
    • v.24 no.4E
    • /
    • pp.144-151
    • /
    • 2005
  • We evaluate the performance of emotion recognition from speech signals when an ordinary speaker talks to an entertainment robot. For each frame of a speech utterance, we extract frame-based features: pitch, energy, formants, band energies, mel-frequency cepstral coefficients (MFCCs), and the velocity/acceleration of pitch and MFCCs. For discriminative classifiers, a fixed-length utterance-based feature vector is computed from the statistics of the frame-based features. Using a speaker-independent database, we evaluate two promising classifiers: the support vector machine (SVM) and the hidden Markov model (HMM). For angry/bored/happy/neutral/sad emotion classification, the SVM and HMM classifiers yield 42.3% and 40.8% accuracy, respectively. We show that this accuracy is significant compared to the performance of foreign human listeners.
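A minimal sketch of the utterance-level pooling the abstract describes, assuming simple statistics (mean, standard deviation, min, max) and scikit-learn's SVM; the exact statistics and SVM settings are not given in the abstract:

```python
# Sketch: pool frame-based features into a fixed-length utterance vector,
# then classify the emotion with an SVM.
import numpy as np
from sklearn.svm import SVC

def utterance_vector(frames):
    """frames: (n_frames, n_feats) array of pitch, energy, MFCCs, etc."""
    stats = [frames.mean(0), frames.std(0), frames.min(0), frames.max(0)]
    return np.concatenate(stats)    # fixed length regardless of duration

# Hypothetical usage with (frames, label) training pairs:
# X = np.stack([utterance_vector(f) for f, _ in train_pairs])
# y = [label for _, label in train_pairs]
# clf = SVC(kernel='rbf').fit(X, y)
```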

Noise Robust Automatic Speech Recognition Scheme with Histogram of Oriented Gradient Features

  • Park, Taejin;Beack, SeungKwan;Lee, Taejin
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.3 no.5
    • /
    • pp.259-266
    • /
    • 2014
  • In this paper, we propose a novel technique for noise-robust automatic speech recognition (ASR). Advances in ASR have made it possible to recognize isolated words with near-perfect word recognition rates. In a highly noisy environment, however, the mismatch between the training speech and the test data significantly degrades the word recognition accuracy (WRA). Unlike conventional ASR systems employing mel-frequency cepstral coefficients (MFCCs) and a hidden Markov model (HMM), this study applies histogram of oriented gradient (HOG) features and a support vector machine (SVM) to ASR tasks to overcome this problem. The proposed ASR system is less vulnerable to external interference noise and achieves a higher WRA than a conventional ASR system built on MFCCs and an HMM. Its performance was evaluated using a phonetically balanced word (PBW) set mixed with artificially added noise.
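The abstract's core idea, treating the log-spectrogram as an image and describing it with HOG for an SVM, can be sketched as follows (window sizes and HOG cell/block parameters are illustrative assumptions; words are assumed padded or cropped to equal duration so the descriptors have a fixed length):

```python
# Sketch: HOG descriptor of a log-spectrogram "image" for an SVM word classifier.
import numpy as np
from scipy.signal import spectrogram
from skimage.feature import hog

def hog_speech_features(wave, fs=16000):
    _, _, S = spectrogram(wave, fs=fs, nperseg=400, noverlap=240)
    img = np.log(S + 1e-10)
    img = (img - img.min()) / (img.max() - img.min() + 1e-10)  # scale to [0, 1]
    return hog(img, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))   # assumed parameters

# Hypothetical usage: feed the descriptors of noisy training words to an SVM,
# e.g. sklearn.svm.SVC(kernel='rbf'), in place of the MFCC + HMM front end.
```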

Detection of Pathological Voice Using Linear Discriminant Analysis

  • Lee, Ji-Yeoun;Jeong, Sang-Bae;Choi, Hong-Shik;Hahn, Min-Soo
    • MALSORI
    • /
    • no.64
    • /
    • pp.77-88
    • /
    • 2007
  • Mel-frequency cepstral coefficients (MFCCs) and Gaussian mixture models (GMMs) are widely used for pathological voice detection. This paper proposes a method to improve pathological/normal voice classification over the MFCC-based GMM. We analyze the characteristics of the mel-frequency filterbank energies using the Fisher discriminant ratio (FDR), and derive feature vectors through a linear discriminant analysis (LDA) transformation of the filterbank energies (FBEs) and the MFCCs. Accuracy is measured with a GMM classifier. The FBE-LDA-based GMM proves sufficiently discriminative for pathological/normal voice classification, with a classification rate of 96.6%. The proposed method outperforms the MFCC-based GMM, with a notable 54.05% error reduction.
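A hedged sketch of the FBE-LDA + GMM pipeline using scikit-learn (component counts and the frame-level data layout are assumptions):

```python
# Sketch: LDA-transformed filter-bank energies scored by one GMM per class.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.mixture import GaussianMixture

def train(fbe_frames, labels, n_components=8):
    """fbe_frames: (n_frames, n_bands); labels: pathological/normal per frame."""
    lda = LinearDiscriminantAnalysis().fit(fbe_frames, labels)
    Z = lda.transform(fbe_frames)
    gmms = {c: GaussianMixture(n_components).fit(Z[labels == c])
            for c in np.unique(labels)}
    return lda, gmms

def classify(lda, gmms, fbe_frames):
    """Average frame log-likelihood per class; the highest-scoring class wins."""
    Z = lda.transform(fbe_frames)
    return max(gmms, key=lambda c: gmms[c].score(Z))
```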

Histogram Enhancement for Robust Speaker Verification (강인한 화자 확인을 위한 히스토그램 개선 기법)

  • Choi, Jae-Kil;Kwon, Chul-Hong
    • MALSORI
    • /
    • no.63
    • /
    • pp.153-170
    • /
    • 2007
  • It is well known that an acoustic mismatch between the speech obtained during training and testing drastically degrades the accuracy of speaker verification systems. This paper presents a histogram enhancement technique for MFCCs to improve the robustness of a speaker verification system. The technique transforms the features extracted from the speech within an utterance so that their statistics conform to reference distributions; the reference distributions proposed in this paper are the uniform distribution and the beta distribution. The transformation modifies the contrast of the MFCC histogram, improving the performance of a speaker verification system both with clean training and testing and with clean training and noisy testing.
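A minimal sketch of the transformation the abstract describes: map each MFCC dimension's empirical CDF within an utterance onto a reference distribution (uniform, or beta via its quantile function). The beta parameters here are illustrative placeholders:

```python
# Sketch: histogram enhancement of MFCCs toward a reference distribution.
import numpy as np
from scipy.stats import beta, rankdata

def histogram_match(feats, a=2.0, b=2.0, use_beta=True):
    """feats: (n_frames, n_dims) MFCCs from one utterance; a, b are
    assumed beta-distribution parameters."""
    out = np.empty_like(feats, dtype=float)
    n = feats.shape[0]
    for d in range(feats.shape[1]):
        u = rankdata(feats[:, d]) / (n + 1)               # empirical CDF in (0, 1)
        out[:, d] = beta.ppf(u, a, b) if use_beta else u  # uniform: keep u itself
    return out
```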

Discrimination of Pathological Speech Using Hidden Markov Models

  • Wang, Jianglin;Jo, Cheol-Woo
    • Speech Sciences
    • /
    • v.13 no.3
    • /
    • pp.7-18
    • /
    • 2006
  • Diagnosis of pathological voice is one of the important issues in biomedical applications of speech technology. This study focuses on discriminating voice disorders using hidden Markov models (HMMs) to automatically distinguish normal voices from voices with vocal fold disorders. The method is non-intrusive, inexpensive, and fully automated, requiring only a speech sample from the subject. Speech data were collected from normal subjects and patients, and mel-frequency cepstral coefficients (MFCCs) were modeled by an HMM classifier. Left-to-right HMMs with different numbers of states (3, 5, and 7) and 3 mixtures were trained. The method achieves an accuracy of 93.8% on training data and 91.7% on test data in discriminating normal voices from vocal fold disorder voices for a sustained /a/.
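A sketch of the classifier setup using the hmmlearn library (not the authors' code): one left-to-right GMM-HMM per class, with the 5-state / 3-mixture configuration taken from the abstract and the transition structure assumed:

```python
# Sketch: left-to-right GMM-HMMs for normal vs. disordered voice.
import numpy as np
from hmmlearn.hmm import GMMHMM

def left_to_right_hmm(n_states=5, n_mix=3):
    m = GMMHMM(n_components=n_states, n_mix=n_mix, covariance_type='diag',
               init_params='mcw', params='stmcw')   # keep our topology at init
    m.startprob_ = np.eye(n_states)[0]              # always start in state 0
    T = np.zeros((n_states, n_states))
    for i in range(n_states):                       # self-loop plus forward step
        T[i, i] = 0.5
        T[i, min(i + 1, n_states - 1)] += 0.5
    m.transmat_ = T
    return m

# Hypothetical usage: fit one model per class on concatenated MFCC sequences,
# then pick the class whose model scores a test utterance highest:
# model.fit(np.vstack(seqs), lengths=[len(s) for s in seqs])
# label = max(models, key=lambda c: models[c].score(test_mfccs))
```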

An Input Transformation with MFCCs and CNN Learning Based Robust Bearing Fault Diagnosis Method for Various Working Conditions (MFCCs를 이용한 입력 변환과 CNN 학습에 기반한 운영 환경 변화에 강건한 베어링 결함 진단 방법)

  • Seo, Yangjin
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.4
    • /
    • pp.179-188
    • /
    • 2022
  • There has been much successful research on deep-learning-based bearing fault diagnosis, but a critical issue remains: the difference in data distribution between training and test data drawn from different working conditions degrades performance when these methods are applied to machines in the field. Data adaptation methods have been proposed as a solution and have shown good results, but each approach is strictly limited to a specific application scenario or presupposition, which makes it difficult to use in real-world applications. Therefore, in this study, we propose a method that, using an input transformation with MFCCs and a simple CNN architecture, performs robust diagnosis on target-domain data without additional learning or tuning of the model generated from the source-domain data. We evaluate the proposed method on the CWRU bearing dataset, one of the representative datasets for bearing fault diagnosis. The experimental results show that our method matches the performance of transfer-learning-based methods and outperforms an input-transformation-based baseline method by at least 15%.
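A hedged sketch of the pipeline, vibration segment to MFCC map to a small CNN, with librosa and PyTorch; the sampling rate, MFCC count, and network shape are illustrative, not the paper's exact settings:

```python
# Sketch: MFCC input transformation of a vibration signal plus a simple CNN.
import librosa
import numpy as np
import torch.nn as nn

def mfcc_image(segment, sr=12000, n_mfcc=32):
    """Treat a vibration segment like audio; 12 kHz matches common CWRU data."""
    return librosa.feature.mfcc(y=segment.astype(float), sr=sr, n_mfcc=n_mfcc)

class SmallCNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes))

    def forward(self, x):            # x: (batch, 1, n_mfcc, n_frames)
        return self.net(x)
```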

Driver Verification System Using Biometrical GMM Supervector Kernel (생체기반 GMM Supervector Kernel을 이용한 운전자검증 기술)

  • Kim, Hyoung-Gook
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.9 no.3
    • /
    • pp.67-72
    • /
    • 2010
  • This paper presents a biometric driver verification system, evaluated in in-car experiments, based on the analysis of speech and face information. We use mel-frequency cepstral coefficients (MFCCs) for speaker verification from the speech signal. For face verification, the face region is detected by the AdaBoost algorithm and a dimension-reduced feature vector is extracted from the face region using principal component analysis. We then apply the extracted speech and face feature vectors to an SVM kernel with a Gaussian mixture model (GMM) supervector. The experimental results of the proposed approach show a clear improvement over a simple GMM or SVM approach.
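The GMM-supervector kernel can be sketched as follows (relevance factor, component count, and data layout are assumptions): a universal background model (UBM) is trained on pooled data, its means are MAP-adapted to each speaker's frames, and the stacked means feed an SVM:

```python
# Sketch: GMM mean supervectors for SVM-based driver verification.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

def supervector(ubm, frames, r=16.0):
    """MAP-adapt the UBM means to `frames` (n_frames, dim) and flatten them."""
    post = ubm.predict_proba(frames)               # responsibilities (n, K)
    n_k = post.sum(axis=0)                         # soft counts per component
    f_k = post.T @ frames                          # first-order statistics (K, dim)
    alpha = (n_k / (n_k + r))[:, None]             # relevance-factor weighting
    means = alpha * (f_k / np.maximum(n_k[:, None], 1e-8)) \
            + (1 - alpha) * ubm.means_
    return means.ravel()

# Hypothetical usage:
# ubm = GaussianMixture(64, covariance_type='diag').fit(background_frames)
# X = np.stack([supervector(ubm, f) for f in training_utterances])
# svm = SVC(kernel='linear').fit(X, labels)        # driver vs. impostor
```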

A Study on Correlation between Sasang Constitution and Speech Features (사상체질과 음성특징과의 상관관계 연구)

  • Kwon, Chul-Hong;Kim, Jong-Yeol;Kim, Keun-Ho;Han, Sung-Man
    • Journal of Haehwa Medicine
    • /
    • v.19 no.2
    • /
    • pp.219-228
    • /
    • 2011
  • Objective: Sasang constitutional medicine utilizes voice characteristics to diagnose a person's constitution. In this paper we propose methods to analyze Sasang constitution using speech information technology; that is, this study aims to establish the relationship between the Sasang constitutions and their corresponding voice characteristics by investigating various speech variables. Materials & Methods: Voice recordings of 1,406 speakers were obtained whose constitutions had already been diagnosed by experts in the field. A total of 144 speech features obtained from five vowels and a sentence are used, including pitch, intensity, formant, bandwidth, MDVP, and MFCC-related variables for each constitution. We analyze the speech variables to determine whether there are statistically significant differences among the three constitutions. Results: The main speech variables classifying the three constitutions are related to pitch and MFCCs for males, and to formants and MFCCs for females. The correct decision rate is 73.7% for male Soeumin, 63.3% for male Soyangin, 57.3% for male Taeumin, 74.0% for female Soeumin, 75.6% for female Soyangin, and 94.3% for female Taeumin, 73.0% on average. Conclusion: The experimental results show statistically significant correlations between some speech variables and the constitutions.
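A minimal sketch of the statistical screening the study describes, assuming a one-way ANOVA per speech variable across the three constitution groups (the paper's actual test is not specified in the abstract):

```python
# Sketch: per-variable ANOVA across constitution groups.
import numpy as np
from scipy.stats import f_oneway

def significant_features(feats, groups, names, alpha=0.05):
    """feats: (n_speakers, n_vars); groups: constitution label per speaker."""
    hits = []
    for j, name in enumerate(names):
        samples = [feats[groups == g, j] for g in np.unique(groups)]
        _, p = f_oneway(*samples)      # one-way ANOVA over the three groups
        if p < alpha:
            hits.append((name, p))
    return sorted(hits, key=lambda t: t[1])   # most significant first
```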

Attention-based CNN-BiGRU for Bengali Music Emotion Classification

  • Subhasish Ghosh;Omar Faruk Riad
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.9
    • /
    • pp.47-54
    • /
    • 2023
  • For Bengali music emotion classification, deep learning models, particularly CNNs and RNNs, are frequently used, but previous studies suffered from low accuracy and overfitting. In this research, an attention-based Conv1D and BiGRU model is designed for music emotion classification, and comparative experiments show that the proposed model classifies emotions more accurately. We propose a Conv1D and BiGRU model with attention for emotion classification on our Bengali music dataset. Wav preprocessing makes use of MFCCs. Contextual features are extracted by two Conv1D layers, which also reduce the dimensionality of the feature space, and dropout is used to counter overfitting. Two bidirectional GRU networks update past and future emotion representations of the output from the Conv1D layers, and the two BiGRU layers are connected to an attention mechanism that gives greater weight to informative MFCC feature vectors, further increasing the accuracy of the proposed classification model. The resulting vector is finally classified into four emotion classes (Angry, Happy, Relax, Sad) by a dense, fully connected layer with softmax activation. The proposed Conv1D+BiGRU+Attention model classifies emotions in the Bengali music dataset more effectively than baseline methods, achieving 95% accuracy.
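A hedged PyTorch sketch of the described architecture, two Conv1D layers with dropout, two BiGRU layers, additive attention pooling, and a four-class output; layer widths and kernel sizes are assumptions:

```python
# Sketch: Conv1D -> BiGRU -> attention pooling for 4-way emotion classification.
import torch
import torch.nn as nn

class ConvBiGRUAttention(nn.Module):
    def __init__(self, n_mfcc=40, n_classes=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_mfcc, 64, 5, padding=2), nn.ReLU(), nn.Dropout(0.3),
            nn.Conv1d(64, 64, 5, padding=2), nn.ReLU(), nn.Dropout(0.3))
        self.gru = nn.GRU(64, 64, num_layers=2, bidirectional=True,
                          batch_first=True)
        self.attn = nn.Linear(128, 1)              # scores each time step
        self.out = nn.Linear(128, n_classes)

    def forward(self, x):                          # x: (batch, n_mfcc, time)
        h = self.conv(x).transpose(1, 2)           # (batch, time, 64)
        h, _ = self.gru(h)                         # (batch, time, 128)
        w = torch.softmax(self.attn(h), dim=1)     # attention over time steps
        ctx = (w * h).sum(dim=1)                   # weighted context vector
        return self.out(ctx)                       # softmax lives in the loss
```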