• Title/Summary/Keyword: MFCC (Mel-Frequency Cepstral Coefficients)

52 search results

Sound Reinforcement Based on Context Awareness for Hearing Impaired (청각장애인을 위한 상황인지기반의 음향강화기술)

  • Choi, Jae-Hun;Chang, Joon-Hyuk
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.5 / pp.109-114 / 2011
  • In this paper, we apply context awareness based on a Gaussian mixture model (GMM) to sound reinforcement for the hearing impaired. In our approach, harmful sound is amplified through the sound reinforcement algorithm according to context awareness based on a GMM constructed from Mel-frequency cepstral coefficient (MFCC) feature vectors extracted from sound data. According to the experimental results, the proposed approach is found to be effective in various acoustic environments.
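
The GMM-based context decision described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the two-class setup ("harmful" vs. "background"), and the toy diagonal-covariance GMMs are all assumptions for the example.

```python
import math

def diag_gauss_logpdf(x, mean, var):
    """Log-density of x under a diagonal-covariance Gaussian."""
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, mean, var))

def gmm_loglik(x, weights, means, variances):
    """Log-likelihood of one feature vector under a diagonal GMM."""
    comps = [math.log(w) + diag_gauss_logpdf(x, m, v)
             for w, m, v in zip(weights, means, variances)]
    mx = max(comps)  # log-sum-exp for numerical stability
    return mx + math.log(sum(math.exp(c - mx) for c in comps))

def classify_context(mfcc_vec, gmm_harmful, gmm_background):
    """Label a frame by whichever GMM assigns it higher likelihood."""
    return ("harmful" if gmm_loglik(mfcc_vec, *gmm_harmful)
                       > gmm_loglik(mfcc_vec, *gmm_background)
            else "background")
```

In the actual system, each GMM would be trained on MFCC vectors from labeled sound data, and the "harmful" decision would trigger the reinforcement (amplification) stage.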

A New Feature for Speech Segments Extraction with Hidden Markov Models (숨은마코프모형을 이용하는 음성구간 추출을 위한 특징벡터)

  • Hong, Jeong-Woo;Oh, Chang-Hyuck
    • Communications for Statistical Applications and Methods / v.15 no.2 / pp.293-302 / 2008
  • In this paper we propose a new feature, average power, for speech segment extraction with hidden Markov models, based on the mel frequencies of speech signals. The average power is compared with the mel-frequency cepstral coefficients (MFCC) and the power coefficient. To compare the performance of the three types of features, speech data were collected for words with plosives, which are generally known to be hard to detect. Experiments show that the average power is more accurate and efficient than MFCC and the power coefficient for speech segment extraction in environments with various levels of noise.
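
The paper does not fully specify its average-power feature here, but the general idea of a frame-level power feature can be sketched as plain short-time average power. The function name and the frame/hop sizes are assumptions for illustration; the paper's feature is computed over mel frequencies rather than the raw waveform.

```python
def frame_average_power(signal, frame_len=256, hop=128):
    """Average power (mean squared amplitude) per analysis frame."""
    powers = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        powers.append(sum(s * s for s in frame) / frame_len)
    return powers
```

A sequence of such per-frame values would then serve as the observation sequence for the HMM-based segment detector.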

Mel-Frequency Cepstral Coefficients Using Formants-Based Gaussian Distribution Filterbank (포만트 기반의 가우시안 분포를 가지는 필터뱅크를 이용한 멜-주파수 켑스트럴 계수)

  • Son, Young-Woo;Hong, Jae-Keun
    • The Journal of the Acoustical Society of Korea / v.25 no.8 / pp.370-374 / 2006
  • Mel-frequency cepstral coefficients are widely used as features for speech recognition. In the MFCC extraction process, the spectrum obtained by the Fourier transform of the input speech signal is divided into mel-frequency bands, and the energy of each band is computed. The coefficients are then extracted by the discrete cosine transform of the band energies. In this paper, we calculate the output energy of each bandpass filter by applying a weighting function to the mel-frequency scaled bandpass filters. The weighting function is a Gaussian distribution centered at the formant frequency. In the experiments, the proposed method shows performance comparable to standard MFCC in clean conditions and better performance in noisy conditions.
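
The standard pipeline this abstract describes (mel filterbank energies followed by a DCT) can be sketched in a few functions. This is a generic textbook version, not the paper's formant-weighted variant; the function names, filter count, and FFT size are assumptions, and the input is taken to be a precomputed power spectrum.

```python
import math

def hz_to_mel(f):
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular filters spaced evenly on the mel scale up to sr/2."""
    mel_max = hz_to_mel(sr / 2.0)
    hz_pts = [mel_to_hz(i * mel_max / (n_filters + 1))
              for i in range(n_filters + 2)]
    bins = [int(n_fft * f / sr) for f in hz_pts]
    banks = []
    for j in range(1, n_filters + 1):
        fb = [0.0] * (n_fft // 2 + 1)
        for k in range(bins[j - 1], bins[j]):          # rising slope
            fb[k] = (k - bins[j - 1]) / max(1, bins[j] - bins[j - 1])
        for k in range(bins[j], bins[j + 1]):          # falling slope
            fb[k] = (bins[j + 1] - k) / max(1, bins[j + 1] - bins[j])
        banks.append(fb)
    return banks

def mfcc_from_power_spectrum(power_spec, banks, n_ceps=13):
    """Log filterbank energies followed by a DCT-II."""
    log_e = [math.log(max(1e-10, sum(p * w for p, w in zip(power_spec, b))))
             for b in banks]
    n = len(log_e)
    return [sum(e * math.cos(math.pi * c * (i + 0.5) / n)
                for i, e in enumerate(log_e))
            for c in range(n_ceps)]
```

The paper's modification would replace the triangular filter shapes above with Gaussian windows centered on the formant frequencies.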

Digital Isolated Word Recognition System based on MFCC and DTW Algorithm (MFCC와 DTW에 알고리즘을 기반으로 한 디지털 고립단어 인식 시스템)

  • Zang, Xian;Chong, Kil-To
    • Proceedings of the KIEE Conference / 2008.10b / pp.290-291 / 2008
  • The most popular speech feature used in speech recognition today is Mel-Frequency Cepstral Coefficients (MFCC), which reflect the perceptual characteristics of the human ear more accurately than other parameters. This paper adopts MFCC and its first-order difference, which captures the dynamic character of the speech signal, as a combined parametric representation. Furthermore, we apply the Dynamic Time Warping (DTW) algorithm to search for matching paths in the pattern recognition process. We used the software "GoldWave" to record English digits in a lab environment, and the simulation results indicate that the algorithm achieves higher recognition accuracy than approaches using LPCC and other parameters in experiments on the Digital Isolated Word Recognition (DIWR) system.
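
The DTW matching step used above can be sketched with the classic dynamic-programming recurrence. This is the textbook algorithm, not the paper's exact implementation; the function name and the element-wise distance are assumptions (a real system would compare MFCC frame vectors, e.g. with Euclidean distance).

```python
def dtw_distance(seq_a, seq_b, dist=lambda a, b: abs(a - b)):
    """Dynamic Time Warping cost between two sequences."""
    inf = float("inf")
    n, m = len(seq_a), len(seq_b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = dist(seq_a[i - 1], seq_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch seq_b
                                 cost[i][j - 1],      # stretch seq_a
                                 cost[i - 1][j - 1])  # step both
    return cost[n][m]
```

An isolated-word recognizer then labels an utterance with the reference template of lowest DTW cost, which is why DTW tolerates the speaking-rate variation that defeats plain frame-by-frame comparison.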


Discriminative Weight Training for Gender Identification (변별적 가중치 학습을 적용한 성별인식 알고리즘)

  • Kang, Sang-Ick;Chang, Joon-Hyuk
    • The Journal of the Acoustical Society of Korea / v.27 no.5 / pp.252-255 / 2008
  • In this paper, we apply discriminative weight training to support vector machine (SVM)-based gender identification. In our approach, the gender decision rule is expressed as an SVM over optimally weighted mel-frequency cepstral coefficients (MFCC), with the weights derived by a minimum classification error (MCE) method. This differs from previous work in that a different weight is assigned to each MFCC filter bank, which is considered more realistic. According to the experimental results, the proposed approach is found to be effective for gender identification using SVM.
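
The role of the per-band weights can be sketched as scaling each MFCC dimension before the SVM scoring step. This is only an illustrative linear-kernel decision function: the function name, the sign convention, and the toy parameters are assumptions, and the MCE training that produces the weights is not shown.

```python
def weighted_svm_score(mfcc_vec, band_weights, sv_coeffs, support_vecs, bias):
    """Linear-kernel SVM score on per-band weighted MFCCs."""
    # Scale each cepstral coefficient by its discriminatively trained weight.
    x = [w * f for w, f in zip(band_weights, mfcc_vec)]
    score = bias
    for alpha, sv in zip(sv_coeffs, support_vecs):
        score += alpha * sum(xi * si for xi, si in zip(x, sv))
    return score  # sign of the score gives the gender decision (assumed)
```

MCE training would adjust `band_weights` to minimize a smoothed count of misclassifications on held-out data, emphasizing the filter banks that best separate the two classes.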

Speech Emotion Recognition Using 2D-CNN with Mel-Frequency Cepstrum Coefficients

  • Eom, Youngsik;Bang, Junseong
    • Journal of information and communication convergence engineering / v.19 no.3 / pp.148-154 / 2021
  • With the advent of context-aware computing, many attempts have been made to understand emotions. Among these, Speech Emotion Recognition (SER) recognizes the speaker's emotions from speech information. SER succeeds by selecting distinctive features and classifying them in an appropriate way. In this paper, the performance of SER using neural network models (e.g., a fully connected network (FCN) and a convolutional neural network (CNN)) with Mel-Frequency Cepstral Coefficients (MFCC) is examined in terms of the accuracy and distribution of emotion recognition. On the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), after tuning model parameters, a two-dimensional Convolutional Neural Network (2D-CNN) with MFCC showed the best performance, with an average accuracy of 88.54% over five emotions (anger, happiness, calm, fear, and sadness) of men and women. In addition, examining the distribution of emotion recognition accuracies across the neural network models shows that the 2D-CNN with MFCC can be expected to achieve an overall accuracy of 75% or more.
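
A 2D-CNN treats the time-by-coefficient MFCC matrix as an image. The core operation, a valid 2-D convolution over that matrix, can be sketched in plain Python; the function name is an assumption, and a real model would stack many such layers with learned kernels, nonlinearities, and pooling.

```python
def conv2d(feat_map, kernel):
    """Valid 2-D convolution (cross-correlation) over an MFCC map."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(feat_map) - kh + 1):
        row = []
        for j in range(len(feat_map[0]) - kw + 1):
            row.append(sum(kernel[a][b] * feat_map[i + a][j + b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out
```

Because the kernel slides along both the time and the cepstral axes, the network can learn local time-frequency patterns (e.g., pitch contours associated with anger vs. calm) that a fully connected model would have to memorize position by position.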

Performance Improvement of EMG-Pattern Recognition Using MFCC-HMM-GMM (MFCC-HMM-GMM을 이용한 근전도(EMG)신호 패턴인식의 성능 개선)

  • Choi, Heung-Ho;Kim, Jung-Ho;Kwon, Jang-Woo
    • Journal of Biomedical Engineering Research / v.27 no.5 / pp.237-244 / 2006
  • This study proposes an approach to improving the performance of EMG (electromyogram) pattern recognition. MFCC (Mel-Frequency Cepstral Coefficients) extraction is modeled after the characteristics of the human hearing organ; while it provides the most typical features in the frequency domain, it must be reorganized to capture the features of EMG signals. Moreover, the dynamic aspects of EMG are important for tasks such as continuous prosthetic control or recognition of EMG signals of varying length, which most approaches have not successfully mastered. Thus, this paper proposes a reorganized MFCC together with HMM-GMM, which is adaptable to the dynamic features of the signal. It also requires an analysis of the most suitable system settings for EMG pattern recognition. To meet this requirement, the study balanced the recognition rate against the error rates produced by various settings during learning, based on EMG data for each motion.
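
The "dynamic features" mentioned here are commonly captured by differencing consecutive feature frames (the delta features also used in the DTW paper above). A minimal first-order version can be sketched as follows; the function name is an assumption, and practical systems usually use a regression over several neighboring frames rather than a simple difference.

```python
def delta_features(frames):
    """First-order difference between consecutive feature frames."""
    return [[c - p for c, p in zip(cur, prev)]
            for prev, cur in zip(frames, frames[1:])]
```

Appending these deltas to the static MFCC vector gives the recognizer a direct view of how the signal is changing over time, which matters for variable-length EMG bursts.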

Evaluation of Frequency Warping Based Features and Spectro-Temporal Features for Speaker Recognition (화자인식을 위한 주파수 워핑 기반 특징 및 주파수-시간 특징 평가)

  • Choi, Young Ho;Ban, Sung Min;Kim, Kyung-Wha;Kim, Hyung Soon
    • Phonetics and Speech Sciences / v.7 no.1 / pp.3-10 / 2015
  • In this paper, different frequency scales for cepstral feature extraction are evaluated for text-independent speaker recognition. To this end, mel-frequency cepstral coefficients (MFCC), linear frequency cepstral coefficients (LFCC), and bilinear warped frequency cepstral coefficients (BWFCC) are applied to the speaker recognition experiment. In addition, spectro-temporal features extracted by the cepstral-time matrix (CTM) are examined as an alternative to the delta and delta-delta features. Experiments on the NIST speaker recognition evaluation (SRE) 2004 task are carried out using the Gaussian mixture model-universal background model (GMM-UBM) method and the joint factor analysis (JFA) method, both based on the ALIZE 3.0 toolkit. Experimental results with both methods show that BWFCC with an appropriate warping factor yields better performance than MFCC and LFCC. It is also shown that the feature set including spectro-temporal information based on the CTM outperforms the conventional feature set with delta and delta-delta features.
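
Bilinear frequency warping, on which BWFCC is based, maps a normalized frequency through a first-order all-pass transform controlled by a single warping factor. A sketch of the standard mapping is below; the function name is an assumption, and the paper's exact parameterization may differ.

```python
import math

def bilinear_warp(omega, alpha):
    """Bilinear (all-pass) frequency warping of omega in [0, pi].

    alpha = 0 leaves frequencies unchanged; positive alpha expands the
    low-frequency region, mimicking a mel-like scale.
    """
    return omega + 2.0 * math.atan(alpha * math.sin(omega)
                                   / (1.0 - alpha * math.cos(omega)))
```

Sweeping `alpha` is what the abstract calls choosing an "appropriate warping factor": it smoothly interpolates between a linear scale and increasingly mel-like scales.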

GMM-Based Gender Identification Employing Group Delay (Group Delay를 이용한 GMM기반의 성별 인식 알고리즘)

  • Lee, Kye-Hwan;Lim, Woo-Hyung;Kim, Nam-Soo;Chang, Joon-Hyuk
    • The Journal of the Acoustical Society of Korea / v.26 no.6 / pp.243-249 / 2007
  • We propose an effective voice-based gender identification method using group delay (GD). Generally, features for speech recognition are composed of magnitude information rather than phase information. In our approach, we exploit the difference between male and female voices in GD, which is the derivative of the Fourier transform phase. We also propose a novel feature-fusion scheme based on a combination of GD and magnitude information such as mel-frequency cepstral coefficients (MFCC), linear predictive coding (LPC) coefficients, reflection coefficients, and formants. The experimental results indicate that GD is effective in discriminating gender and that performance improves significantly when the proposed feature-fusion technique is applied.
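
Group delay is the negative derivative of the Fourier transform phase with respect to frequency. A minimal numerical sketch, assuming an FIR impulse response as input, is below; the function name, grid resolution, and the phase-unwrapping approach are illustrative choices, not the paper's method.

```python
import cmath
import math

def group_delay(h, n_freq=64):
    """Negative derivative of the unwrapped DFT phase of filter h."""
    # Sample the frequency response on a grid over [0, pi).
    phases = []
    for k in range(n_freq):
        w = math.pi * k / n_freq
        H = sum(hn * cmath.exp(-1j * w * n) for n, hn in enumerate(h))
        phases.append(cmath.phase(H))
    # Unwrap the phase, then differentiate numerically.
    unwrapped = [phases[0]]
    for p in phases[1:]:
        d = p - unwrapped[-1]
        d -= 2 * math.pi * round(d / (2 * math.pi))
        unwrapped.append(unwrapped[-1] + d)
    dw = math.pi / n_freq
    return [-(unwrapped[k + 1] - unwrapped[k]) / dw
            for k in range(n_freq - 1)]
```

For a pure delay of two samples the group delay is constant at 2, which is a convenient sanity check; for speech, the shape of this curve across frequency carries the phase information that magnitude-only features like MFCC discard.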

CMSBS Extraction Using Periodicity-based Mel Sub-band Spectral Subtraction (신호의 주기성에 따라 변형되는 스펙트럼 차감을 이용한 CMSBS)

  • Lee, Woo-Young;Lee, Sang-Ho;Hong, Jae-Keun
    • Proceedings of the KAIS Fall Conference / 2009.05a / pp.768-771 / 2009
  • MFCC (Mel-Frequency Cepstral Coefficients) are currently the most widely used feature vectors in speech recognition. However, MFCC recognition performance degrades in noisy environments. To address this shortcoming, the CMSBS (Compression and Mel Sub-Band Spectral subtraction) method is used, which combines mel sub-band spectral subtraction with energy compression according to the signal-to-noise ratio. In this paper, to compensate for the loss of important speech information that occurs because the mel sub-band spectral subtraction in CMSBS is applied under the same conditions in both speech and silence intervals, we propose a method that modifies the spectral flooring parameter using the periodicity of the signal. Experiments with the proposed method show recognition rates similar to the conventional method for nearly noise-free speech signals, while the modified mel sub-band spectral subtraction yields greater improvements in recognition rate as the noise component increases.
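
Spectral subtraction with a spectral floor, and a periodicity-dependent floor as the abstract proposes, can be sketched as follows. The function names, the linear mapping from periodicity to floor, and the default constants are all assumptions for illustration; the paper operates on mel sub-band energies rather than raw magnitude bins.

```python
def spectral_subtract(spec, noise_est, beta=0.02):
    """Magnitude spectral subtraction with a spectral floor beta*|X|."""
    return [max(x - n, beta * x) for x, n in zip(spec, noise_est)]

def adaptive_floor(periodicity, beta_min=0.01, beta_max=0.1):
    """Raise the floor in highly periodic (voiced) frames.

    periodicity in [0, 1]; the linear mapping is an assumption.
    """
    return beta_min + (beta_max - beta_min) * periodicity
```

The intent of varying the floor is that voiced (periodic) frames carry harmonic structure worth protecting from over-subtraction, while silence or unvoiced frames can be subtracted more aggressively.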
