• Title/Summary/Keyword: MFCC

Generating Speech feature vectors for Effective Emotional Recognition (효과적인 감정인식을 위한 음성 특징 벡터 생성)

  • Sim, In-woo;Han, Eui Hwan;Cha, Hyung Tai
    • Proceedings of the Korea Information Processing Society Conference / 2019.05a / pp.687-690 / 2019
  • In this paper, we generate effective feature vectors for emotion recognition. For this purpose, the RAVDESS speech dataset was used, and speech signals representing four emotions (neutral, calm, happy, and sad) were selected from it. To extract effective features from among the MFCC coefficients 1-13, pitch, ZCR, and peak energy conventionally used for emotion recognition, we used the ratio of between-class to within-class variance. The experimental results confirmed that, among the feature vectors used for emotion recognition, peak energy, pitch, MFCC2, MFCC3, MFCC4, MFCC12, and MFCC13 are effective.
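
As a rough illustration of the feature-ranking idea described above, the sketch below scores each feature by its ratio of between-class to within-class variance (Fisher's discriminant ratio). The feature layout and the random data are placeholders, not the RAVDESS features used in the paper.

```python
# Minimal sketch (not the authors' code): ranking per-feature discriminability by the
# ratio of between-class to within-class variance, as the abstract describes.
import numpy as np

def fisher_ratio(X, y):
    """Return the between-class / within-class variance ratio for each feature column."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return between / within

# Dummy data: 16 features (e.g. MFCC 1-13, pitch, ZCR, peak energy), 4 emotion classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))
y = rng.integers(0, 4, size=200)
scores = fisher_ratio(X, y)
ranking = np.argsort(scores)[::-1]   # features sorted by discriminability
print(ranking[:7])                   # indices of the seven most discriminative features
```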

Classification of Underwater Transient Signals Using MFCC Feature Vector (MFCC 특징 벡터를 이용한 수중 천이 신호 식별)

  • Lim, Tae-Gyun;Hwang, Chan-Sik;Lee, Hyeong-Uk;Bae, Keun-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.32 no.8C / pp.675-680 / 2007
  • This paper presents a new method for classification of underwater transient signals, which employs frame-based decision with Mel-Frequency Cepstral Coefficients (MFCC). The MFCC feature vector is extracted on a frame-by-frame basis from an input signal that is detected as a transient signal, and Euclidean distances are calculated between each frame vector and all MFCC feature vectors in the reference database. Each frame of the detected input signal is then mapped to the class with the minimum Euclidean distance in the reference database. Finally, the input signal is classified as the class with the maximum mapping rate over the frames. Experimental results demonstrate that the proposed method is very promising for classification of underwater transient signals.
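
A minimal sketch of the frame-based decision described in this abstract, assuming the MFCC vectors and the reference database are already available as NumPy arrays; the data below is random and purely illustrative.

```python
# Frame-wise nearest-neighbour labelling of MFCC vectors against a reference database,
# followed by a majority vote over the frames (a sketch, not the paper's implementation).
import numpy as np

def classify_transient(mfcc_frames, ref_vectors, ref_labels):
    """mfcc_frames: (n_frames, n_mfcc); ref_vectors: (n_ref, n_mfcc); ref_labels: (n_ref,)."""
    votes = []
    for frame in mfcc_frames:
        dists = np.linalg.norm(ref_vectors - frame, axis=1)  # Euclidean distance to every reference vector
        votes.append(ref_labels[np.argmin(dists)])           # map the frame to the closest class
    votes = np.asarray(votes)
    classes, counts = np.unique(votes, return_counts=True)
    return classes[np.argmax(counts)]                        # class with the highest mapping rate

# Dummy usage: 3 reference classes, 20-dimensional MFCC vectors.
rng = np.random.default_rng(1)
ref = rng.normal(size=(300, 20))
labels = rng.integers(0, 3, size=300)
signal_frames = rng.normal(size=(50, 20))
print(classify_transient(signal_frames, ref, labels))
```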

Performance Comparison of Korean Dialect Classification Models Based on Acoustic Features

  • Kim, Young Kook;Kim, Myung Ho
    • Journal of the Korea Society of Computer and Information / v.26 no.10 / pp.37-43 / 2021
  • Using the acoustic features of speech, important social and linguistic information about the speaker can be obtained, and one of the key features is the dialect. A speaker's use of a dialect is a major barrier to interaction with a computer. Dialects can be distinguished at various levels such as phonemes, syllables, words, phrases, and sentences, but it is difficult to distinguish them by identifying each level one by one. Therefore, in this paper, we propose a lightweight Korean dialect classification model using only MFCC among the features of speech data. We study the optimal way to utilize MFCC features on Korean conversational voice data and compare the classification performance for five Korean dialects (Gyeonggi/Seoul, Gangwon, Chungcheong, Jeolla, and Gyeongsang) across eight machine learning and deep learning classification models. Normalizing the MFCC improved the performance of most classification models; compared with the best classification model before normalization, accuracy improved by 1.07% and F1-score by 2.04%.
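
The abstract does not spell out the normalization scheme, so the sketch below assumes per-utterance cepstral mean/variance normalization, one common choice; librosa is used for MFCC extraction and the file path is a placeholder.

```python
# Per-utterance MFCC normalisation (zero mean, unit variance per coefficient) as one
# plausible reading of "normalizing the MFCC"; not necessarily the paper's exact scheme.
import numpy as np
import librosa

def normalized_mfcc(path, n_mfcc=13):
    y, sr = librosa.load(path, sr=None)                     # load audio at its native rate
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # shape: (n_mfcc, n_frames)
    mean = mfcc.mean(axis=1, keepdims=True)
    std = mfcc.std(axis=1, keepdims=True) + 1e-8
    return (mfcc - mean) / std                              # cepstral mean/variance normalisation

# feats = normalized_mfcc("utterance.wav")   # "utterance.wav" is a placeholder path
```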

Detection and Classification for Low-altitude Micro Drone with MFCC and CNN (MFCC와 CNN을 이용한 저고도 초소형 무인기 탐지 및 분류에 대한 연구)

  • Shin, Kyeongsik;Yoo, Sinwoo;Oh, Hyukjun
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.3 / pp.364-370 / 2020
  • This paper addresses the detection and classification of micro-sized aircraft flying at low altitude. A deep-learning-based method using the sounds emitted by the aircraft is proposed to detect and identify them efficiently, with MFCC as the sound feature and a CNN as the detector and classifier. We show that each micro drone has its own distinguishable MFCC signature and confirm that a CNN can serve as detector and classifier even though drone sound is a time-related sequence. Many papers use RNNs for time-related features, but we show that if the number of frames in the MFCC features is large enough to contain the time-related information, those features can be classified with a CNN. With this approach, we achieved high detection and classification rates at low computational cost on a data set consisting of four different drone sounds. This paper therefore presents a simple and effective detection and classification method for micro-sized aircraft.
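
A minimal PyTorch sketch of feeding an MFCC "image" (coefficients x frames) to a small 2-D CNN. This is an illustrative stand-in, not the network architecture used in the paper.

```python
# Small 2-D CNN over a fixed-length MFCC matrix treated as a 1-channel image (assumed layout).
import torch
import torch.nn as nn

class DroneCNN(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),        # make the output size independent of input length
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):                        # x: (batch, 1, n_mfcc, n_frames)
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Dummy forward pass: batch of 8 clips, 13 MFCCs x 128 frames, 4 drone classes.
model = DroneCNN()
logits = model(torch.randn(8, 1, 13, 128))
print(logits.shape)                              # torch.Size([8, 4])
```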

Implementation of Speaker Independent Speech Recognition System Using Independent Component Analysis based on DSP (독립성분분석을 이용한 DSP 기반의 화자 독립 음성 인식 시스템의 구현)

  • 김창근;박진영;박정원;이광석;허강인
    • Journal of the Korea Institute of Information and Communication Engineering / v.8 no.2 / pp.359-364 / 2004
  • In this paper, we implemented a real-time speaker-independent speech recognizer that is robust in noisy environments using a DSP (Digital Signal Processor). The implemented system is composed of a TMS320C32, a floating-point DSP from Texas Instruments Inc., and a CODEC for real-time speech input. Instead of plain MFCC (mel-frequency cepstral coefficients), the recognizer uses a feature parameter that is robust to noise, obtained by transforming the MFCC feature space with ICA (Independent Component Analysis). Recognition results in noisy environments show that the ICA-based feature parameter outperforms MFCC.
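
A rough sketch of the feature-space transformation idea, assuming scikit-learn's FastICA as the ICA implementation; the paper's DSP-based implementation will of course differ, and the training data below is random.

```python
# Transforming an MFCC feature space with ICA (FastICA used as a stand-in implementation).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
mfcc_frames = rng.normal(size=(1000, 13))   # placeholder for MFCC vectors pooled from training speech

ica = FastICA(n_components=13, random_state=0)
ica.fit(mfcc_frames)                        # learn the unmixing matrix from training features

def ica_features(mfcc):
    """Project MFCC frames (n_frames, 13) into the ICA-transformed feature space."""
    return ica.transform(mfcc)

print(ica_features(mfcc_frames[:5]).shape)  # (5, 13)
```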

Study on the Performance of Spectral Contrast MFCC for Musical Genre Classification (스펙트럼 대비 MFCC 특징의 음악 장르 분류 성능 분석)

  • Seo, Jin-Soo
    • The Journal of the Acoustical Society of Korea / v.29 no.4 / pp.265-269 / 2010
  • This paper proposes a novel spectral audio feature, spectral contrast MFCC (SCMFCC), and studies its performance on musical genre classification. For a successful musical genre classifier, extracting features that give direct access to the relevant genre-specific information is crucial. In this regard, features based on spectral contrast, which represents the relative distribution of harmonic and non-harmonic components, have received increased attention. The proposed SCMFCC feature applies spectral contrast on the mel-frequency cepstrum and thus adapts the conventional MFCC in a way more relevant to musical genre classification. By performing classification tests on a widely used music DB, we compare the performance of the proposed feature with that of previous ones.
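
For orientation only: the sketch below computes librosa's off-the-shelf spectral contrast and MFCC and stacks them. This is not the SCMFCC proposed in the paper, which applies the contrast idea on the mel-frequency cepstrum itself; it merely shows the two ingredients involved.

```python
# Stacking standard spectral contrast and MFCC as a combined, frame-aligned feature
# (illustrative baseline only; the file path is a placeholder).
import numpy as np
import librosa

def contrast_plus_mfcc(path, n_mfcc=13):
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)    # (n_mfcc, n_frames)
    contrast = librosa.feature.spectral_contrast(y=y, sr=sr)  # (n_bands + 1, n_frames)
    return np.vstack([mfcc, contrast])                        # frame-aligned combined feature

# feats = contrast_plus_mfcc("track.wav")   # "track.wav" is a placeholder path
```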

Classification of Consonants by SOM and LVQ (SOM과 LVQ에 의한 자음의 분류)

  • Lee, Chai-Bong;Lee, Chang-Young
    • The Journal of the Korea institute of electronic communication sciences / v.6 no.1 / pp.34-42 / 2011
  • In an effort toward the practical realization of a phonetic typewriter, we concentrate on the classification of consonants in this paper. Since many consonants do not show periodic behavior in the time domain, so that the validity of Fourier analysis for them is not obvious, vector quantization (VQ) via LBG clustering is first performed to check whether MFCC and LPCC feature vectors are at all meaningful for consonants. The VQ results showed that it is not easy to draw a clear-cut conclusion about the validity of Fourier analysis for consonants. For classification, two kinds of neural networks are employed in our study: the self-organizing map (SOM) and learning vector quantization (LVQ). Results from SOM revealed that some pairs of phonemes are not resolved. Although LVQ is inherently free from this difficulty, its classification accuracy was found to be low. This suggests that, as far as consonant classification by LVQ is concerned, other types of feature vectors should be deployed in parallel with MFCC. However, the combination of MFCC and LVQ was not found to be inferior to phoneme classification by a language-model-based approach. In all of our experiments, LPCC performed worse than MFCC.
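
A generic NumPy sketch of LVQ1 training and prediction, illustrating the attract/repel prototype update; the prototype counts, learning rate, and random data are arbitrary and not the paper's configuration.

```python
# LVQ1: prototype vectors are pulled toward same-class inputs and pushed away from
# other-class inputs; prediction assigns the label of the nearest prototype.
import numpy as np

def train_lvq1(X, y, prototypes_per_class=2, lr=0.05, epochs=20, seed=0):
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    protos, proto_labels = [], []
    for c in classes:                                       # initialise prototypes from class samples
        idx = rng.choice(np.where(y == c)[0], prototypes_per_class, replace=False)
        protos.append(X[idx])
        proto_labels += [c] * prototypes_per_class
    protos = np.vstack(protos)
    proto_labels = np.array(proto_labels)

    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            d = np.linalg.norm(protos - X[i], axis=1)
            w = np.argmin(d)                                # best-matching prototype
            sign = 1.0 if proto_labels[w] == y[i] else -1.0
            protos[w] += sign * lr * (X[i] - protos[w])     # attract if same class, repel otherwise
    return protos, proto_labels

def predict_lvq(X, protos, proto_labels):
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
    return proto_labels[np.argmin(d, axis=1)]

# Dummy usage with random "MFCC" vectors for 3 consonant classes.
rng = np.random.default_rng(3)
X = rng.normal(size=(300, 13))
y = rng.integers(0, 3, size=300)
protos, plabels = train_lvq1(X, y)
print(predict_lvq(X[:10], protos, plabels))
```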

Music Identification Using Pitch Histogram and MFCC-VQ Dynamic Pattern (피치 히스토그램과 MFCC-VQ 동적 패턴을 사용한 음악 검색)

  • Park Chuleui;Park Mansoo;Kim Sungtak;Kim Hoirin
    • The Journal of the Acoustical Society of Korea / v.24 no.3 / pp.178-185 / 2005
  • This paper presents a new music identification method using the probabilistic and dynamic characteristics of melody. The proposed method uses pitch and MFCC parameters as feature vectors for the characteristics of music notes and represents the melody pattern by a pitch histogram and a temporal sequence of codeword indices. We also propose a new pattern matching method for this hybrid representation. We tested the proposed algorithm in a small search space (drama OSTs) and a broad one (1,005 popular songs). The experimental results on both search spaces showed better performance of the proposed method over conventional methods, with average improvements of 9.9% and 10.2% in error reduction rate, respectively.
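
A rough sketch of the two melody representations mentioned above, assuming librosa's YIN pitch tracker and a scikit-learn K-means codebook; the paper's own VQ training and matching scheme are not reproduced here, and the file path is a placeholder.

```python
# Pitch histogram plus a temporal sequence of MFCC codeword indices for one music clip.
import numpy as np
import librosa
from sklearn.cluster import KMeans

def melody_representation(path, codebook, n_bins=24):
    y, sr = librosa.load(path, sr=None)
    f0 = librosa.yin(y, fmin=80, fmax=800, sr=sr)             # frame-level pitch estimates (Hz)
    hist, _ = np.histogram(f0[np.isfinite(f0)], bins=n_bins, range=(80, 800), density=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T      # (n_frames, 13)
    codewords = codebook.predict(mfcc)                        # temporal sequence of codeword indices
    return hist, codewords

# The codebook would be trained offline on MFCC frames pooled from the reference database:
# codebook = KMeans(n_clusters=64, random_state=0).fit(reference_mfcc_frames)  # placeholder data
```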

Representation of MFCC Feature Based on Linlog Function for Robust Speech Recognition (강인한 음성 인식을 위한 선형 로그 함수 기반의 MFCC 특징 표현 연구)

  • Yun, Young-Sun
    • MALSORI / no.59 / pp.13-25 / 2006
  • In a previous study, the linlog (linear-log) RASTA (J-RASTA) approach based on PLP was proposed to deal with both the channel effect and additive noise. Extracting PLP generally requires more steps and computation than extracting the widely used MFCC. Thus, in this paper, we apply the linlog function to MFCC to investigate the possibility of a simple compensation method that removes both distortions. The experimental results show that the proposed method follows a similar tendency to linlog RASTA-PLP. When the J value is set to 1e-6, the best ERR (Error Reduction Rate) of 33% is obtained. When applying the linlog function to the feature extraction process, the J value plays a very important role in compensating for the corruption, so further study on adaptive or noise-dependent estimation of J is required.
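
A sketch of how the linlog compression ln(1 + J·x) could replace the plain log in a standard MFCC pipeline, assuming librosa's mel filterbank and SciPy's DCT; this is an assumed pipeline, not the paper's code.

```python
# MFCC-style features with linlog compression of the mel filterbank energies: small
# (noise-dominated) energies are compressed almost linearly, large energies roughly logarithmically.
import numpy as np
import librosa
from scipy.fft import dct

def linlog_mfcc(path, J=1e-6, n_mfcc=13, n_mels=40):
    y, sr = librosa.load(path, sr=None)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)  # mel filterbank energies
    compressed = np.log(1.0 + J * mel)                               # linlog instead of plain log
    return dct(compressed, type=2, axis=0, norm="ortho")[:n_mfcc]    # cepstral coefficients

# feats = linlog_mfcc("utterance.wav", J=1e-6)   # placeholder path; J controls the compression knee
```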


A Study on Hazardous Sound Detection Robust to Background Sound and Noise (배경음 및 잡음에 강인한 위험 소리 탐지에 관한 연구)

  • Ha, Taemin;Kang, Sanghoon;Cho, Seongwon
    • Journal of Korea Multimedia Society / v.24 no.12 / pp.1606-1613 / 2021
  • Recently, various attempts have been made to control hardware through the integration of sensors and artificial intelligence. This paper proposes smart hazardous sound detection for the home. Previous sound recognition methods have problems with the handling of background sounds and low recognition accuracy for high-frequency sounds. To get around these problems, a new MFCC (Mel-Frequency Cepstral Coefficient) algorithm using a Wiener filter and a modified filterbank is proposed. Experiments comparing the performance of the proposed method and the original MFCC were conducted, with a DNN (Deep Neural Network) used to classify the feature vectors extracted by the proposed MFCC. The experimental results show the superiority of the modified MFCC over the conventional MFCC: 1% higher training accuracy and a 6.6% higher recognition rate.
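
A minimal sketch of denoising before feature extraction, assuming SciPy's generic Wiener filter applied to the waveform and librosa's standard mel filterbank; the paper's modified filterbank is not reproduced here, and the file path is a placeholder.

```python
# Wiener-filter the waveform, then extract MFCC and pool it into one vector per clip
# so a DNN classifier can be trained on fixed-length features.
import numpy as np
import librosa
from scipy.signal import wiener

def denoised_mfcc(path, n_mfcc=13):
    y, sr = librosa.load(path, sr=None)
    y_clean = wiener(y, mysize=29)                            # simple Wiener denoising of the waveform
    mfcc = librosa.feature.mfcc(y=y_clean, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)                                  # one fixed-length vector per clip

# A DNN classifier (e.g. scikit-learn's MLPClassifier) could then be trained on these vectors:
# from sklearn.neural_network import MLPClassifier
# clf = MLPClassifier(hidden_layer_sizes=(128, 64)).fit(train_features, train_labels)  # placeholder data
```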