• Title/Summary/Keyword: LPCC

Search Result 28

Comparison of EEG Feature Vector for Emotion Classification according to Music Listening (음악에 따른 감정분류을 위한 EEG특징벡터 비교)

  • Lee, So-Min;Byun, Sung-Woo;Lee, Seok-Pil
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.63 no.5
    • /
    • pp.696-702
    • /
    • 2014
  • Recently, research analyzing the relationship between emotional states and musical stimuli using EEG has been increasing. The selection of feature vectors is very important for the performance of EEG pattern classifiers. This paper presents a comparison of EEG feature vectors for emotion classification according to music listening. For this, we extract feature vectors such as DAMV, IAV, LPC, and LPCC from the EEG signals of each class related to music listening and compare the separability of the extracted feature vectors using the Bhattacharyya distance. More effective feature vectors are then recommended for emotion classification according to music listening.
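The separability comparison described in this abstract can be sketched as follows, assuming each class of feature vectors is modeled as a single multivariate Gaussian (the function name and toy usage are illustrative, not taken from the paper):

```python
import numpy as np

def bhattacharyya_distance(x1, x2):
    """Bhattacharyya distance between two classes of feature vectors,
    each modeled as a multivariate Gaussian (rows = frames, cols = features)."""
    m1, m2 = x1.mean(axis=0), x2.mean(axis=0)
    c1, c2 = np.cov(x1, rowvar=False), np.cov(x2, rowvar=False)
    c = (c1 + c2) / 2.0                       # pooled covariance
    diff = m2 - m1
    mean_term = diff @ np.linalg.solve(c, diff) / 8.0
    _, logdet_c = np.linalg.slogdet(c)
    _, logdet_c1 = np.linalg.slogdet(c1)
    _, logdet_c2 = np.linalg.slogdet(c2)
    cov_term = 0.5 * (logdet_c - 0.5 * (logdet_c1 + logdet_c2))
    return mean_term + cov_term
```

A larger distance between two emotion classes indicates a feature vector that separates them better.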

A Study on the Features for Building Korean Digit Recognition System Based on Multilayer Perceptron (다층 퍼셉트론에 기반한 한국어 숫자음 인식시스템 구현을 위한 특징 연구)

  • 김인철;김대영
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.6 no.4
    • /
    • pp.81-88
    • /
    • 2001
  • In this paper, a Korean digit recognition system based on a multilayer perceptron is implemented. We also investigate the performance of widely used speech features, such as the Mel-scale filterbank, MFCC, LPCC, and PLP coefficients, by applying them as inputs to the proposed recognition system. In order to build a robust speech system, experiments demonstrating its recognition performance on clean as well as corrupted data are carried out. In experiments recognizing 20 Korean digits, we found that the Mel-scale filterbank coefficients perform best in terms of recognition accuracy for both the speaker-dependent and speaker-independent databases, even when considerable noise is added.
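As a rough illustration of the Mel-scale filterbank features compared above, a bank of triangular filters spaced evenly on the mel scale can be built like this (the sampling rate, FFT size, and filter count are assumed values, not those of the paper):

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters=20, n_fft=512, sr=8000):
    """Triangular filters on the half spectrum, evenly spaced in mel."""
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):                 # rising slope
            fb[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):                 # falling slope
            fb[i - 1, k] = (r - k) / max(r - c, 1)
    return fb
```

Multiplying a frame's power spectrum by this matrix yields the filterbank energies; taking their logs (and optionally a DCT) gives MFCC-style features.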


Isolated-Word Speech Recognition in Telephone Environment Using Perceptual Auditory Characteristic (인지적 청각 특성을 이용한 고립 단어 전화 음성 인식)

  • Choi, Hyung-Ki;Park, Ki-Young;Kim, Chong-Kyo
    • Journal of the Institute of Electronics Engineers of Korea TE
    • /
    • v.39 no.2
    • /
    • pp.60-65
    • /
    • 2002
  • In this paper, we propose the GFCC (gammatone-filter frequency cepstral coefficient) parameter, which is based on auditory characteristics, to achieve a better speech recognition rate. Speech recognition experiments are performed on isolated words acquired from the telephone network. To compare the GFCC parameter with other parameters, recognition experiments are also carried out using the MFCC and LPCC parameters. In addition, for each parameter, CMS (cepstral mean subtraction) is either applied or not, in order to compensate for channel distortion in the telephone network. The experimental results show that the recognition rate using the GFCC parameter is better than that of the other parameters.
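The CMS step mentioned above is simple to sketch: a stationary telephone channel adds an approximately constant offset in the cepstral domain, so subtracting the per-utterance mean compensates for it (function name illustrative):

```python
import numpy as np

def cepstral_mean_subtraction(cepstra):
    """Remove the per-utterance mean from each cepstral coefficient track.
    cepstra: array of shape (n_frames, n_coeffs).
    A fixed channel multiplies the spectrum, which becomes an additive
    constant in the cepstrum; the utterance mean estimates that constant."""
    return cepstra - cepstra.mean(axis=0, keepdims=True)
```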

Speaker Verification with the Constraint of Limited Data

  • Kumari, Thyamagondlu Renukamurthy Jayanthi;Jayanna, Haradagere Siddaramaiah
    • Journal of Information Processing Systems
    • /
    • v.14 no.4
    • /
    • pp.807-823
    • /
    • 2018
  • Speaker verification system performance depends on the utterances of each speaker. To verify a speaker, important information has to be captured from the utterance. Under the constraint of limited data, speaker verification has become a challenging task: the testing and training data amount to only a few seconds. The feature vectors extracted by single frame size and rate (SFSR) analysis are not sufficient for training and testing speakers in speaker verification. This leads to poor speaker modeling during training and may not provide good decisions during testing. The problem can be resolved by increasing the number of feature vectors extracted from training and testing data of the same duration. For that we use multiple frame size (MFS), multiple frame rate (MFR), and multiple frame size and rate (MFSR) analysis techniques for speaker verification under the limited-data condition. These analysis techniques extract relatively more feature vectors during training and testing and thus yield improved modeling and testing for limited data. To demonstrate this, we use mel-frequency cepstral coefficients (MFCC) and linear prediction cepstral coefficients (LPCC) as features. A Gaussian mixture model (GMM) and a GMM-universal background model (GMM-UBM) are used for modeling the speakers. The database used is NIST-2003. The experimental results indicate that MFS, MFR, and MFSR analysis perform markedly better than SFSR analysis, and that LPCC-based MFSR analysis performs best among the analysis and feature extraction techniques compared.
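The core idea of MFS/MFR/MFSR analysis, pooling feature vectors from several frame sizes and rates to get more training material out of the same few seconds of speech, can be sketched as follows (the frame sizes, hops, and the toy two-dimensional feature are illustrative stand-ins for MFCC/LPCC, not the paper's settings):

```python
import numpy as np

def frames(x, size, hop):
    """Slice a 1-D signal into overlapping frames of a given size and hop."""
    n = 1 + max(0, len(x) - size) // hop
    return [x[i * hop : i * hop + size] for i in range(n)]

def toy_features(frame):
    """Stand-in for MFCC/LPCC: log energy and zero-crossing rate."""
    energy = np.log(np.sum(frame ** 2) + 1e-10)
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0
    return np.array([energy, zcr])

def mfsr_features(x, sizes=(160, 240, 320), hops=(40, 80)):
    """Pool feature vectors from every (frame size, frame rate) pair."""
    feats = [toy_features(f) for s in sizes for h in hops for f in frames(x, s, h)]
    return np.vstack(feats)
```

Because every (size, hop) pair contributes its own set of frames, the pooled feature matrix is several times larger than a single-configuration (SFSR) one.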

A Design of Dangerous Sound Detection Engine of Wearable Device for Hearing Impaired Persons (청각장애인을 위한 웨어러블 기기의 위험소리 검출 엔진 설계)

  • Byun, Sung-Woo;Lee, Soek-Pil
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.65 no.7
    • /
    • pp.1263-1269
    • /
    • 2016
  • Hearing-impaired persons are exposed to danger since they cannot be aware of many dangerous situations such as fire alarms, car horns, and so on. They therefore need haptic or visual information when they encounter dangerous situations. In this paper, we design a dangerous-sound detection engine for the hearing impaired. We consider four dangerous indoor situations: a kettle boiling, a fire alarm, a door bell, and a phone ringing. For outdoors, two dangerous situations are considered: a car horn and the siren of an emergency vehicle. For testing, six data sets are collected from these six situations. We extract LPC, LPCC, and MFCC feature vectors from the collected data and compare the vectors for feasibility. Finally, we design a matching engine using an artificial neural network and perform classification tests. The classification tests are performed three times, considering both outdoor and indoor use. The test results show the feasibility of the dangerous-sound detection.
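A matching engine of the kind described, a small neural network mapping a feature vector to one of the six sound classes, can be sketched as a forward pass (the layer sizes are assumed, and the weights here are untrained placeholders; the paper's trained network is not reproduced):

```python
import numpy as np

def mlp_classify(features, w1, b1, w2, b2):
    """Forward pass of a one-hidden-layer neural network matching engine.
    Returns class probabilities for one feature vector via softmax."""
    h = np.tanh(features @ w1 + b1)        # hidden layer
    logits = h @ w2 + b2                   # output layer, one unit per class
    e = np.exp(logits - logits.max())      # numerically stable softmax
    return e / e.sum()
```

At run time the device would raise a haptic or visual alert whenever the most probable class is one of the dangerous sounds.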

Discriminative Feature Vector Selection for Emotion Classification Based on Speech. (음성신호기반의 감정분석을 위한 특징벡터 선택)

  • Choi, Ha-Na;Byun, Sung-Woo;Lee, Seok-Pil
    • Proceedings of the KIEE Conference
    • /
    • 2015.07a
    • /
    • pp.1391-1392
    • /
    • 2015
  • Recently, as computer technology has advanced and computers have taken on diverse forms, various wearable devices have emerged. Accordingly, human emotion information has become important in human-interface technology, and much research on emotion recognition has been conducted. This paper proposes feature vectors suitable for emotion analysis. To this end, human emotions were classified into four categories (neutral, joy, sadness, and anger), and speech was recorded without noise from broadcast media. Three feature vectors, MFCC, LPC, and LPCC, were extracted, and their separability was compared using the Bhattacharyya distance measure.


A study on Effective Feature Parameters Comparison for Speaker Recognition (화자인식에 효과적인 특징벡터에 관한 비교연구)

  • Park TaeSun;Kim Sang-Jin;Kwang Moon;Hahn Minsoo
    • Proceedings of the KSPS conference
    • /
    • 2003.05a
    • /
    • pp.145-148
    • /
    • 2003
  • In this paper, we carried out a comparative study of various feature parameters for effective speaker recognition, such as LPC, LPCC, MFCC, log area ratios, reflection coefficients, inverse sine coefficients, and delta parameters. We also adopted cepstral liftering and cepstral mean subtraction to check their usefulness. Our recognition system is an HMM-based one with a 4-connected-Korean-digit speech database. The experimental results help in selecting the most effective parameter for speaker recognition.
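The cepstral liftering adopted above is commonly implemented as a sinusoidal lifter; a minimal sketch follows (the lifter length L=22 is a conventional choice, not necessarily the paper's setting):

```python
import numpy as np

def lifter(cepstra, L=22):
    """Sinusoidal cepstral liftering: w[n] = 1 + (L/2) * sin(pi * n / L).
    Rescales the cepstral coefficients so that mid-order coefficients,
    which tend to discriminate well, are emphasized.
    cepstra: array of shape (n_frames, n_coeffs)."""
    n = np.arange(cepstra.shape[1])
    w = 1.0 + (L / 2.0) * np.sin(np.pi * n / L)
    return cepstra * w
```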


Analysis of Feature Extraction Methods for Distinguishing the Speech of Cleft Palate Patients (구개열 환자 발음 판별을 위한 특징 추출 방법 분석)

  • Kim, Sung Min;Kim, Wooil;Kwon, Tack-Kyun;Sung, Myung-Whun;Sung, Mee Young
    • Journal of KIISE
    • /
    • v.42 no.11
    • /
    • pp.1372-1379
    • /
    • 2015
  • This paper presents an analysis of feature extraction methods used for distinguishing the speech of patients with cleft palates from that of people with normal palates. This research is a basic study for the development of a software system for automatic recognition and restoration of speech disorders, in pursuit of improving the welfare of speech-disabled persons. Monosyllabic voice data for the experiments were collected for three groups: normal speech, cleft palate speech, and simulated cleft palate speech. The data consist of 14 basic Korean consonants, 5 complex consonants, and 7 vowels. Feature extraction is performed using three well-known methods: LPC, MFCC, and PLP. The pattern recognition process is executed using a GMM acoustic model. From our experiments, we conclude that the MFCC method is generally the most effective way to identify speech distortions. These results may contribute to the automatic detection and correction of the distorted speech of cleft palate patients, along with the development of an identification tool for levels of speech distortion.
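GMM-based pattern recognition of the kind used here can be sketched as follows: each group's speech is modeled by a diagonal-covariance Gaussian mixture, and a test utterance is assigned to the model giving the highest log-likelihood (the EM training step is omitted; the parameters below would in practice come from fitting each group's data):

```python
import numpy as np

def gmm_loglik(x, weights, means, variances):
    """Total log-likelihood of frames x under a diagonal-covariance GMM.
    x: (n_frames, d); weights: (k,); means, variances: (k, d)."""
    log_comp = []
    for w, m, v in zip(weights, means, variances):
        ll = -0.5 * (np.sum(np.log(2.0 * np.pi * v))
                     + np.sum((x - m) ** 2 / v, axis=1))
        log_comp.append(np.log(w) + ll)          # log of weighted component
    # log-sum-exp over components, then sum over frames
    return np.logaddexp.reduce(np.stack(log_comp), axis=0).sum()
```

Classification then amounts to evaluating `gmm_loglik` once per group model and taking the argmax.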

A PCA-based MFDWC Feature Parameter for Speaker Verification System (화자 검증 시스템을 위한 PCA 기반 MFDWC 특징 파라미터)

  • Hahm Seong-Jun;Jung Ho-Youl;Chung Hyun-Yeol
    • The Journal of the Acoustical Society of Korea
    • /
    • v.25 no.1
    • /
    • pp.36-42
    • /
    • 2006
  • A principal component analysis (PCA)-based mel-frequency discrete wavelet coefficient (MFDWC) feature parameter for a speaker verification system is presented in this paper. In this method, we use the first eigenvector obtained from PCA to calculate the energy of each node of the level approximated by the mel scale. This eigenvector satisfies the constraint of a general weighting function, namely that the squared sum of the components of the weighting function is unity, and is considered to represent a speaker's characteristics closely, because the first eigenvector of each speaker differs considerably from the others. For verification, we use the Universal Background Model (UBM) approach, which compares the claimed speaker's model with the UBM at the frame level. We performed experiments to test the effectiveness of the PCA-based parameter and found that our proposed parameters obtain an improved average performance of 0.80% compared to MFCC, 5.14% compared to LPCC, and 6.69% compared to the existing MFDWC.
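The unit-squared-sum property claimed for the first eigenvector can be illustrated as follows (function and variable names are illustrative, and this sketch only extracts the eigenvector, not the full MFDWC pipeline):

```python
import numpy as np

def first_eigen_weights(feature_frames):
    """First PCA eigenvector of a speaker's feature frames, usable as a
    weighting function; its squared components sum to one because eigh
    returns orthonormal eigenvectors."""
    centered = feature_frames - feature_frames.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    w = vecs[:, -1]                        # eigenvector of largest eigenvalue
    return w * np.sign(w.sum() or 1.0)     # fix the sign for reproducibility
```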

Digital Isolated Word Recognition System based on MFCC and DTW Algorithm (MFCC와 DTW에 알고리즘을 기반으로 한 디지털 고립단어 인식 시스템)

  • Zang, Xian;Chong, Kil-To
    • Proceedings of the KIEE Conference
    • /
    • 2008.10b
    • /
    • pp.290-291
    • /
    • 2008
  • The most popular speech feature used in speech recognition today is the Mel-Frequency Cepstral Coefficients (MFCC) algorithm, which reflects the perceptual characteristics of the human ear more accurately than other parameters. This paper adopts MFCC and its first-order difference, which reflects the dynamic character of the speech signal, as a combined parametric representation. Furthermore, we employ the Dynamic Time Warping (DTW) algorithm to search for matching paths in the pattern recognition process. We used the software "GoldWave" to record English digits in a lab environment, and the simulation results indicate that the algorithm has higher recognition accuracy than systems using LPCC and other feature parameters in the experiments for the Digital Isolated Word Recognition (DIWR) system.
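The DTW matching described above can be sketched with the standard O(nm) dynamic-programming recursion (a minimal implementation, not the paper's code):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping between two feature sequences (rows = frames).
    Returns the minimum accumulated frame-to-frame Euclidean distance,
    allowing frames to be stretched or compressed in time."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]
```

In an isolated-word recognizer, the test utterance's MFCC sequence is compared against each reference template and the word with the smallest DTW distance wins.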
