• Title/Summary/Keyword: cepstral coefficients


Performance Improvement of Speaker Recognition Using Enhanced Feature Extraction in Glottal Flow Signals and Multiple Feature Parameter Combination (Glottal flow 신호에서의 향상된 특징추출 및 다중 특징파라미터 결합을 통한 화자인식 성능 향상)

  • Kang, Jihoon;Kim, Youngil;Jeong, Sangbae
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.19 no.12
    • /
    • pp.2792-2799
    • /
    • 2015
  • In this paper, we utilize source mel-frequency cepstral coefficients (SMFCCs), skewness, and kurtosis extracted from glottal flow signals to improve speaker recognition performance. Generally, because the high-band magnitude response of glottal flow signals is somewhat flat, the SMFCCs are extracted using the response below a predefined cutoff frequency. The extracted SMFCCs, skewness, and kurtosis are concatenated with conventional feature parameters. Then, dimensionality reduction by principal component analysis (PCA) and linear discriminant analysis (LDA) follows, so that performance can be compared with conventional systems under equivalent conditions. The proposed recognition system outperformed the conventional system in large-scale speaker recognition experiments. Notably, the performance improvement was more pronounced for small Gaussian mixtures.
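The feature-combination-plus-PCA step this abstract describes can be sketched as follows; this is a minimal numpy sketch in which all dimensions, feature names, and the random data are illustrative, not taken from the paper:

```python
import numpy as np

# Hypothetical per-frame features: 13 conventional MFCCs, 13 glottal-flow
# SMFCCs, plus skewness and kurtosis (all stand-in random data).
rng = np.random.default_rng(0)
mfcc = rng.normal(size=(200, 13))
smfcc = rng.normal(size=(200, 13))
skew_kurt = rng.normal(size=(200, 2))

# Concatenate the parameter streams frame by frame.
combined = np.hstack([mfcc, smfcc, skew_kurt])  # shape (200, 28)

def pca_reduce(X, n_components):
    """Project X onto its top principal components."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:n_components]
    return Xc @ eigvecs[:, order]

reduced = pca_reduce(combined, 13)
print(reduced.shape)  # (200, 13)
```

The LDA stage would follow the same pattern but use class labels to pick discriminative directions rather than maximum-variance ones.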

Classification of Underwater Transient Signals Using MFCC Feature Vector (MFCC 특징 벡터를 이용한 수중 천이 신호 식별)

  • Lim, Tae-Gyun;Hwang, Chan-Sik;Lee, Hyeong-Uk;Bae, Keun-Sung
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.32 no.8C
    • /
    • pp.675-680
    • /
    • 2007
  • This paper presents a new method for classification of underwater transient signals, which employs frame-based decision with Mel-frequency cepstral coefficients (MFCC). The MFCC feature vector is extracted on a frame-by-frame basis for an input signal that is detected as a transient signal, and Euclidean distances are calculated between this and all MFCC feature vectors in the reference database. Each frame of the detected input signal is then mapped to the class having the minimum Euclidean distance in the reference database. Finally, the input signal is classified as the class with the maximum mapping rate in the reference database. Experimental results demonstrate that the proposed method is very promising for classification of underwater transient signals.
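The frame-wise nearest-neighbor mapping plus majority vote described above can be sketched like this; the toy two-dimensional "MFCC" vectors and class names are purely illustrative:

```python
import numpy as np

# Toy reference database: class label -> reference feature vectors.
reference = {
    "class_a": np.array([[0.0, 0.0], [0.1, 0.1]]),
    "class_b": np.array([[1.0, 1.0], [0.9, 1.1]]),
}

def classify_transient(frames):
    """Map each frame to the class of its nearest reference vector
    (Euclidean distance), then pick the class with the highest
    mapping rate across all frames."""
    votes = {}
    for frame in frames:
        best_class, best_dist = None, float("inf")
        for label, vecs in reference.items():
            d = np.linalg.norm(vecs - frame, axis=1).min()
            if d < best_dist:
                best_class, best_dist = label, d
        votes[best_class] = votes.get(best_class, 0) + 1
    return max(votes, key=votes.get)

frames = np.array([[0.05, 0.0], [0.9, 1.0], [0.0, 0.1]])
print(classify_transient(frames))  # class_a (2 of 3 frames map to it)
```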

Monophthong Recognition Optimizing Muscle Mixing Based on Facial Surface EMG Signals (안면근육 표면근전도 신호기반 근육 조합 최적화를 통한 단모음인식)

  • Lee, Byeong-Hyeon;Ryu, Jae-Hwan;Lee, Mi-Ran;Kim, Deok-Hwan
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.3
    • /
    • pp.143-150
    • /
    • 2016
  • In this paper, we propose a Korean monophthong recognition method that optimizes muscle combinations based on facial surface EMG signals. We observed that EMG signal patterns and muscle activity vary according to Korean monophthong pronunciation. As feature extraction algorithms, we use RMS, VAR, MMAV1, and MMAV2, which showed high recognition accuracy in a previous study, together with cepstral coefficients. We then classify Korean monophthongs using QDA (quadratic discriminant analysis) and HMM (hidden Markov model). The muscle combination is optimized using the input data in the training phase, and the optimized result is applied in the recognition phase, where new data are input and the Korean monophthong is finally recognized. Experimental results show average recognition accuracies of 85.7% with QDA and 75.1% with HMM.

Whale Sound Reconstruction using MFCC and L2-norm Minimization (MFCC와 L2-norm 최소화를 이용한 고래소리의 재생)

  • Chong, Ui-Pil;Jeon, Seo-Yun;Hong, Jeong-Pil;Jo, Se-Hyung
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.19 no.4
    • /
    • pp.147-152
    • /
    • 2018
  • Underwater transient signals are complex, variable, and nonlinear, resulting in difficulty in accurate modeling with reference patterns. We analyze one type of underwater transient signal, whale sounds, using MFCC (Mel-frequency cepstral coefficients) and synthesize them from the MFCC using weighted L2-norm minimization techniques. The whales in this experiment are humpback, right, blue, gray, and minke whales. The 20th-order MFCC coefficients are extracted from the original signals using MATLAB and reconstructed using weighted L2-norm minimization with the inverse MFCC. Finally, we found the optimum weighting factor for reconstruction of whale sounds to be 3~4.
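The core of weighted L2-norm minimization is recovering, from an underdetermined mapping, the solution of smallest weighted norm consistent with the observed coefficients. A minimal numpy sketch follows; here `M` merely stands in for the mel-filterbank/DCT analysis chain and the diagonal weight is illustrative, not the paper's exact formulation:

```python
import numpy as np

# Underdetermined map: 20 coefficients observed from a 64-bin spectrum.
rng = np.random.default_rng(1)
M = rng.normal(size=(20, 64))
x_true = rng.normal(size=64)
c = M @ x_true                     # observed coefficients

# Illustrative diagonal weighting (heavier penalty at higher bins).
w_diag = 1.0 + 3.0 * np.linspace(0.0, 1.0, 64)
W_inv = np.diag(1.0 / w_diag)

# Closed-form solution of: minimize x^T W x  subject to  M x = c.
x_hat = W_inv @ M.T @ np.linalg.solve(M @ W_inv @ M.T, c)

print(np.allclose(M @ x_hat, c))  # True: constraints are satisfied exactly
```

Among all spectra mapping to the same coefficients, the weighting steers the reconstruction toward the bins penalized least, which is the knob the paper tunes.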

Generating Data and Applying Machine Learning Methods for Music Genre Classification (음악 장르 분류를 위한 데이터 생성 및 머신러닝 적용 방안)

  • Bit-Chan Eom;Dong-Hwi Cho;Choon-Sung Nam
    • Journal of Internet Computing and Services
    • /
    • v.25 no.4
    • /
    • pp.57-64
    • /
    • 2024
  • This paper aims to enhance the accuracy of music genre classification for tracks whose genre information is not provided, by utilizing machine learning to classify a large amount of music data. The paper proposes collecting and preprocessing data instead of using the GTZAN dataset commonly employed in previous genre-classification research. To create a dataset with classification performance superior to the GTZAN dataset, we extract the specific segments whose onsets have the highest energy level. We utilize 57 features as the main characteristics of the music data used for training, including Mel-frequency cepstral coefficients (MFCC). Using a Support Vector Machine (SVM) model on the preprocessed data, we achieved a training accuracy of 85% and a testing accuracy of 71% when classifying into the Classical, Jazz, Country, Disco, Soul, Rock, Metal, and Hiphop genres.
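The "highest-energy onset segment" preprocessing idea can be sketched as a sliding-window energy search; the frame length, hop, and toy signal below are illustrative values, not the paper's configuration:

```python
import numpy as np

def highest_energy_segment(signal, seg_len, hop):
    """Return the start index of the fixed-length segment with
    maximal energy (sum of squared samples)."""
    best_start, best_energy = 0, -1.0
    for start in range(0, len(signal) - seg_len + 1, hop):
        e = float(np.sum(signal[start:start + seg_len] ** 2))
        if e > best_energy:
            best_start, best_energy = start, e
    return best_start

# Toy signal: silence, a loud burst, silence.
sig = np.concatenate([np.zeros(100), np.ones(50) * 2.0, np.zeros(100)])
start = highest_energy_segment(sig, seg_len=50, hop=10)
print(start)  # 100: the window covering the burst
```

In practice the selected segment, rather than the whole track, would then feed the 57-feature extraction stage.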

Statistical Speech Feature Selection for Emotion Recognition

  • Kwon Oh-Wook;Chan Kwokleung;Lee Te-Won
    • The Journal of the Acoustical Society of Korea
    • /
    • v.24 no.4E
    • /
    • pp.144-151
    • /
    • 2005
  • We evaluate the performance of emotion recognition via speech signals when a plain speaker talks to an entertainment robot. For each frame of a speech utterance, we extract the frame-based features: pitch, energy, formant, band energies, mel frequency cepstral coefficients (MFCCs), and velocity/acceleration of pitch and MFCCs. For discriminative classifiers, a fixed-length utterance-based feature vector is computed from the statistics of the frame-based features. Using a speaker-independent database, we evaluate the performance of two promising classifiers: support vector machine (SVM) and hidden Markov model (HMM). For angry/bored/happy/neutral/sad emotion classification, the SVM and HMM classifiers yield 42.3% and 40.8% accuracy, respectively. We show that the accuracy is significant compared to the performance by foreign human listeners.
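Turning variable-length frame features into a fixed-length utterance vector via statistics, as described above, is a standard recipe; a minimal numpy sketch follows, where the choice of statistics (mean, std, min, max) and the 13-dimensional frame features are illustrative, not necessarily the paper's:

```python
import numpy as np

def utterance_vector(frames):
    """frames: (n_frames, n_features) array of frame-based features.
    Returns a fixed-length vector of per-dimension statistics,
    independent of the utterance length."""
    return np.concatenate([
        frames.mean(axis=0),
        frames.std(axis=0),
        frames.min(axis=0),
        frames.max(axis=0),
    ])

# E.g. 120 frames of 13 MFCCs (random stand-in data).
frames = np.random.default_rng(2).normal(size=(120, 13))
vec = utterance_vector(frames)
print(vec.shape)  # (52,): 4 statistics x 13 features
```

The fixed dimensionality is what makes the vector usable by a discriminative classifier such as an SVM.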

Noise Robust Automatic Speech Recognition Scheme with Histogram of Oriented Gradient Features

  • Park, Taejin;Beack, SeungKwan;Lee, Taejin
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.3 no.5
    • /
    • pp.259-266
    • /
    • 2014
  • In this paper, we propose a novel technique for noise robust automatic speech recognition (ASR). The development of ASR techniques has made it possible to recognize isolated words with a near-perfect word recognition rate. However, in a highly noisy environment, a distinct mismatch between the trained speech and the test data results in a significantly degraded word recognition accuracy (WRA). Unlike conventional ASR systems employing Mel-frequency cepstral coefficients (MFCCs) and a hidden Markov model (HMM), this study applies histogram of oriented gradient (HOG) features and a Support Vector Machine (SVM) to ASR tasks to overcome this problem. Our proposed ASR system is less vulnerable to external interference noise, and achieves a higher WRA compared to a conventional ASR system equipped with MFCCs and an HMM. The performance of our proposed ASR system was evaluated using a phonetically balanced word (PBW) set mixed with artificially added noise.
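The key ingredient of HOG is a magnitude-weighted histogram of gradient orientations, here computed over a spectrogram-like 2-D array; the cell layout, bin count, and random input below are illustrative, not the paper's configuration:

```python
import numpy as np

def orientation_histogram(img, n_bins=9):
    """HOG-style descriptor for one cell: bin unsigned gradient
    orientations into n_bins, weighted by gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)        # unsigned, in [0, pi)
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())     # accumulate magnitudes
    return hist / (hist.sum() + 1e-12)             # normalize the block

spec = np.random.default_rng(3).random((32, 32))   # stand-in spectrogram
h = orientation_histogram(spec)
print(h.shape)  # (9,)
```

A full HOG descriptor would compute such histograms over a grid of cells and concatenate them before feeding the SVM.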

Telephone Speech Recognition with Data-Driven Selective Temporal Filtering based on Principal Component Analysis

  • Jung Sun Gyun;Son Jong Mok;Bae Keun Sung
    • Proceedings of the IEEK Conference
    • /
    • 2004.08c
    • /
    • pp.764-767
    • /
    • 2004
  • The performance of a speech recognition system is generally degraded in telephone environments because of distortions caused by background noise and varying channel characteristics. In this paper, data-driven temporal filters are investigated to improve the performance of a specific recognition task such as telephone speech. Three different temporal filtering methods are presented, with recognition results for Korean connected-digit telephone speech. Filter coefficients are derived from the cepstral-domain feature vectors using principal component analysis.
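One common way to derive a data-driven temporal filter, consistent with the abstract's PCA approach, is to stack short time windows of a cepstral-coefficient trajectory and take the leading eigenvector of their covariance as an FIR filter. The sketch below uses a random trajectory and an illustrative window length; it is a sketch of the general technique, not the paper's exact derivation:

```python
import numpy as np

rng = np.random.default_rng(4)
trajectory = rng.normal(size=1000)   # one cepstral coefficient over time
win = 9                              # temporal window length (illustrative)

# Stack overlapping windows of the trajectory.
segs = np.array([trajectory[i:i + win] for i in range(len(trajectory) - win)])

# Leading principal component of the windowed covariance = FIR filter taps.
cov = np.cov(segs, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
fir = eigvecs[:, -1]                     # top eigenvector

# Apply the learned temporal filter along the feature trajectory.
filtered = np.convolve(trajectory, fir, mode="valid")
print(fir.shape, filtered.shape)
```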


Speech Enhancement Using the Adaptive Noise Canceling Technique with a Recursive Time Delay Estimator (재귀적 지연추정기를 갖는 적응잡음제거 기법을 이용한 음성개선)

  • 강해동;배근성
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.31B no.7
    • /
    • pp.33-41
    • /
    • 1994
  • A single-channel adaptive noise canceling (ANC) technique with a recursive time delay estimator (RTDE) is presented for removing the effects of additive noise on the speech signal. While the conventional method makes a reference signal for the adaptive filter using the pitch estimated on a frame basis from the input speech, the proposed method makes the reference signal using the delay estimated recursively on a sample-by-sample basis. As RTDEs, recursion formulae for the autocorrelation function (ACF) and the average magnitude difference function (AMDF) are derived. The normalized least mean square (NLMS) and recursive least square (RLS) algorithms are applied for adaptation of the filter coefficients. Experimental results with noisy speech demonstrate that the proposed method improves the perceived speech quality as well as the signal-to-noise ratio and cepstral distance when compared with the conventional method.
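The NLMS adaptation mentioned above can be sketched as follows; the filter length, step size, and toy noise-path model are illustrative, and in the actual ANC setup the error signal is the enhanced speech output:

```python
import numpy as np

def nlms(x, d, n_taps=8, mu=0.5, eps=1e-8):
    """Normalized LMS: adapt filter w so that w * x tracks d.
    x: reference input, d: desired (noisy) signal.
    Returns the error signal e = d - y."""
    w = np.zeros(n_taps)
    e = np.zeros(len(d))
    for n in range(n_taps - 1, len(d)):
        u = x[n - n_taps + 1:n + 1][::-1]      # x[n], x[n-1], ...
        y = w @ u
        e[n] = d[n] - y
        w += mu * e[n] * u / (u @ u + eps)     # normalized step size
    return e

rng = np.random.default_rng(5)
noise = rng.normal(size=4000)                  # reference noise
d = np.convolve(noise, [0.6, 0.3], mode="same")  # noise through a toy path

e = nlms(noise, d)
# After convergence, the residual is much smaller than the input noise.
print(float(np.mean(e[-500:] ** 2)) < float(np.mean(d[-500:] ** 2)))  # True
```

The normalization by `u @ u` is what makes the step size insensitive to the input power, which matters for nonstationary speech.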


Applying the Bi-level HMM for Robust Voice-activity Detection

  • Hwang, Yongwon;Jeong, Mun-Ho;Oh, Sang-Rok;Kim, Il-Hwan
    • Journal of Electrical Engineering and Technology
    • /
    • v.12 no.1
    • /
    • pp.373-377
    • /
    • 2017
  • This paper presents a voice-activity detection (VAD) method for sound sequences with various SNRs. For real-time VAD applications, it is inadequate to employ post-processing to remove burst clippings from the VAD output decision. To tackle this problem, building on the bi-level hidden Markov model, in which a state layer is inserted into a typical hidden Markov model (HMM), we formulated a robust VAD method that requires no additional post-processing. In the method, a forward-inference-ratio test was devised to detect the speech endpoints, and Mel-frequency cepstral coefficients (MFCC) were used as the features. Our experimental results show that, across different SNRs, the proposed approach outperforms conventional methods.
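The forward-inference idea underlying the endpoint test can be sketched with a plain two-state (non-speech/speech) HMM forward recursion; the transition matrix, prior, and per-frame log-likelihoods below are illustrative stand-ins, not the paper's bi-level model:

```python
import numpy as np

A = np.array([[0.9, 0.1],    # state 0: non-speech (sticky transitions)
              [0.2, 0.8]])   # state 1: speech
pi = np.array([0.5, 0.5])    # initial state distribution

def forward_speech_prob(loglik):
    """loglik: (T, 2) per-frame log-likelihoods for (non-speech, speech).
    Returns P(speech | obs_1..t) per frame via the scaled forward
    recursion; thresholding this ratio gives an endpoint decision."""
    alpha = pi * np.exp(loglik[0])
    alpha /= alpha.sum()
    probs = [alpha[1]]
    for t in range(1, len(loglik)):
        alpha = (alpha @ A) * np.exp(loglik[t])
        alpha /= alpha.sum()                    # rescale for stability
        probs.append(alpha[1])
    return np.array(probs)

# Frames 0-4 favor non-speech, frames 5-9 favor speech.
ll = np.array([[0.0, -2.0]] * 5 + [[-2.0, 0.0]] * 5)
r = forward_speech_prob(ll)
print(r[0] < 0.5, r[-1] > 0.5)  # True True
```

Because the forward recursion integrates evidence over time, the decision is inherently smoothed, which is why no burst-removal post-processing is needed.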