• Title/Summary/Keyword: cepstrum

Parameter Comparison in Speaker Identification under Noisy Environments (화자식별을 위한 파라미터의 잡음환경에서의 성능비교)

  • Choi, Hong-Sub
    • Speech Sciences
    • /
    • v.7 no.3
    • /
    • pp.185-195
    • /
    • 2000
  • This paper compares the feature parameters used in speaker identification systems under noisy environments. The parameters compared are LP cepstrum (LPCC), cepstral mean subtraction (CMS), pole-filtered CMS (PFCMS), adaptive component weighted cepstrum (ACW), and postfilter cepstrum (PF). A GMM-based text-independent speaker identification system is designed for this purpose. A series of experiments shows that the LPCC parameter is adequate for modelling the speaker when the training and testing environments match. Under mismatched training and testing conditions, however, the modified parameters are preferable to LPCC. In particular, the CMS and PFCMS parameters are more effective under microphone mismatch, while the ACW and PF parameters are better suited to noisier mismatches.

  • PDF
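Cepstral mean subtraction, one of the channel-compensation parameters compared above, simply removes the per-utterance mean from each cepstral dimension so that a fixed channel response cancels out. A minimal numpy sketch of the idea (not the paper's implementation):

```python
import numpy as np

def cepstral_mean_subtraction(cepstra):
    """Remove the per-utterance mean from each cepstral dimension.

    `cepstra` is a (frames x coefficients) array; subtracting the mean
    cancels stationary convolutional distortion such as a fixed
    microphone/channel response, which is why CMS helps under
    microphone mismatch."""
    cepstra = np.asarray(cepstra, dtype=float)
    return cepstra - cepstra.mean(axis=0, keepdims=True)

# A constant channel offset added to every frame disappears after CMS.
frames = np.random.default_rng(0).normal(size=(100, 12))
clean = cepstral_mean_subtraction(frames)
shifted = cepstral_mean_subtraction(frames + 0.7)
```

After CMS, the clean and channel-shifted utterances yield the same features, which is the matched/mismatched robustness the experiments measure.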

EMG signal identification using LPC cepstrum coefficients (LPC cepstrum 계수를 이용한 근전도 신호의 동작판별)

  • Chung, T.Y.;Park, S.H.;Kim, H.R.;Wang, M.S.;Choi, Y.H.;Byun, Y.S.
    • Proceedings of the KIEE Conference
    • /
    • 1988.07a
    • /
    • pp.738-741
    • /
    • 1988
  • In this paper, we deal with the identification of movements from EMG signals using LPC cepstrum coefficients. Movements were identified by extracting the characteristics of similar patterns with a Euclidean distance measure applied to EMG signals generated by voluntary contractions of the subjects' musculature. As the number of coefficients grows, the movement identification rate improves. With exact signal extraction and selection of the optimal number of coefficients, these results are expected to apply to real-time prosthesis control.

  • PDF
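The two building blocks above, converting LPC coefficients to cepstrum coefficients and matching templates by Euclidean distance, can be sketched as follows. The recursion and sign convention are the standard ones, not necessarily those used in the paper:

```python
import numpy as np

def lpc_to_cepstrum(a, n_ceps):
    """Convert LPC prediction coefficients a[1..p] to cepstrum
    coefficients via the standard recursion
        c[n] = a[n] + sum_{k=1}^{n-1} (k/n) * c[k] * a[n-k],
    taking a[n] = 0 for n > p."""
    p = len(a)
    c = np.zeros(n_ceps)
    for n in range(1, n_ceps + 1):
        acc = a[n - 1] if n <= p else 0.0
        for k in range(1, n):
            if n - k <= p:
                acc += (k / n) * c[k - 1] * a[n - k - 1]
        c[n - 1] = acc
    return c

def cepstral_distance(c1, c2):
    """Euclidean distance between two cepstrum vectors, the template
    matching measure used for the movement patterns."""
    return float(np.linalg.norm(np.asarray(c1, float) - np.asarray(c2, float)))
```

An unknown EMG frame would be assigned the movement class whose stored cepstrum template has the smallest `cepstral_distance` to it.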

Impulsive Source Localization in Noise (잡음 속에 묻힌 임펄스 소음원 위치 추정)

  • Kim Yang-Hann;Choi Young-Chul
    • Transactions of the Korean Society for Noise and Vibration Engineering
    • /
    • v.14 no.9 s.90
    • /
    • pp.877-883
    • /
    • 2004
  • This paper addresses how to find where impulsive noise sources are, specifically in the case where the signal is embedded in noise. We propose a signal processing method that can identify the locations of impulsive sources and is robust with respect to spatially distributed noise. This is achieved by a beamforming method modified to operate in the cepstrum domain; notably, the cepstrum has the ability to detect a periodic pulse signal in noise. Numerical simulations and experiments are performed to verify the method. Results show that the proposed technique is quite powerful for localizing faults in noisy environments, and it also requires fewer microphones than the conventional beamforming method.
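The property exploited above, that a periodic pulse train buried in noise produces peaks at its period in the quefrency domain, can be demonstrated with the real cepstrum alone. A small illustrative sketch (the paper's beamforming stage is omitted):

```python
import numpy as np

def real_cepstrum(x):
    """Real cepstrum: inverse FFT of the log magnitude spectrum."""
    magnitude = np.abs(np.fft.fft(x)) + 1e-12  # floor avoids log(0)
    return np.real(np.fft.ifft(np.log(magnitude)))

# Impulse train with a 64-sample period, buried in white noise.
rng = np.random.default_rng(0)
period, n = 64, 4096
x = rng.normal(0.0, 0.05, n)
x[::period] += 1.0

c = real_cepstrum(x)
# The cepstrum shows rahmonic peaks at quefrencies that are
# multiples of the 64-sample pulse period.
```

Reading off the quefrency of the dominant rahmonic recovers the pulse period even though individual pulses are hard to see in the time signal.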

Real-time implementation and performance evaluation of speech classifiers in speech analysis-synthesis

  • Kumar, Sandeep
    • ETRI Journal
    • /
    • v.43 no.1
    • /
    • pp.82-94
    • /
    • 2021
  • In this work, six voiced/unvoiced speech classifiers based on the autocorrelation function (ACF), average magnitude difference function (AMDF), cepstrum, weighted ACF (WACF), zero crossing rate and energy of the signal (ZCR-E), and neural networks (NNs) have been simulated and implemented in real time using the TMS320C6713 DSP starter kit. These speech classifiers have been integrated into a linear-predictive-coding-based speech analysis-synthesis system, and their performance has been compared in terms of voiced/unvoiced classification accuracy, speech quality, and computation time. The classification accuracy and speech quality results show that the NN-based speech classifier performs better than the ACF-, AMDF-, cepstrum-, WACF-, and ZCR-E-based classifiers in both clean and noisy environments. The computation time results show that the AMDF-based classifier is computationally the simplest, so its computation time is the lowest, while that of the NN-based classifier is the highest.
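Of the six classifiers compared, the ZCR-E decision is the simplest to illustrate: voiced frames have high energy and few zero crossings, while unvoiced (fricative, noise-like) frames show the opposite. A toy sketch with illustrative thresholds, not the thresholds or the DSP implementation from the paper:

```python
import numpy as np

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose sign changes."""
    return float(np.mean(np.abs(np.diff(np.sign(frame))) > 0))

def short_time_energy(frame):
    """Mean squared amplitude of the frame."""
    return float(np.mean(np.asarray(frame, dtype=float) ** 2))

def classify_vuv(frame, zcr_thresh=0.25, energy_thresh=0.01):
    """Label a frame voiced ('V') or unvoiced ('U').
    Voiced speech: high energy, low zero-crossing rate.
    Thresholds here are illustrative only."""
    voiced = (short_time_energy(frame) > energy_thresh
              and zero_crossing_rate(frame) < zcr_thresh)
    return "V" if voiced else "U"

t = np.arange(400) / 8000.0
voiced_frame = 0.5 * np.sin(2 * np.pi * 120.0 * t)       # 120 Hz, voiced-like
unvoiced_frame = np.random.default_rng(1).normal(0.0, 0.05, 400)  # noise-like
```

Its low cost per frame is consistent with the computation-time ranking reported above, where such simple time-domain measures beat the NN classifier on speed but not on accuracy.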

A Study on Robust Feature Vector Extraction for Fault Detection and Classification of Induction Motor in Noise Circumstance (잡음 환경에서의 유도 전동기 고장 검출 및 분류를 위한 강인한 특징 벡터 추출에 관한 연구)

  • Hwang, Chul-Hee;Kang, Myeong-Su;Kim, Jong-Myon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.16 no.12
    • /
    • pp.187-196
    • /
    • 2011
  • Induction motors play a vital role in the aeronautical and automotive industries, so many researchers have studied fault detection and classification systems for induction motors to minimize the economic damage caused by their faults. For this reason, this paper extracts robust feature vectors from the normal/abnormal vibration signals of an induction motor under noisy conditions: partial autocorrelation (PARCOR) coefficients, log spectrum powers (LSP), cepstrum coefficient means (CCM), and mel-frequency cepstrum coefficients (MFCC). We then classified the different fault types of the induction motor by using the extracted feature vectors as inputs to a neural network. To find optimal feature vectors, this paper evaluated classification performance with 2 to 20 features. Experimental results showed that five to six features were enough to give almost 100% classification accuracy, except for the CCM features. Furthermore, since vibration signals can include noise components from the surroundings, we added white Gaussian noise to the original vibration signals and re-evaluated classification performance. The evaluation showed that LSP was the most robust under noise, followed by PARCOR and MFCC.
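The robustness test above depends on corrupting the vibration signals with white Gaussian noise at a controlled level. A minimal helper for adding noise at a target SNR, assumed here since the paper does not spell out its exact procedure, could look like:

```python
import numpy as np

def add_white_noise(signal, snr_db, rng=None):
    """Return `signal` plus white Gaussian noise scaled so the
    signal-to-noise ratio equals `snr_db` (in decibels)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    signal = np.asarray(signal, dtype=float)
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / (10.0 ** (snr_db / 10.0))
    return signal + rng.normal(0.0, np.sqrt(p_noise), signal.shape)

# Corrupt a toy vibration-like signal at 10 dB SNR.
t = np.arange(20000) / 10000.0
vibration = np.sin(2 * np.pi * 60.0 * t)
noisy = add_white_noise(vibration, snr_db=10.0)
```

Re-extracting the PARCOR/LSP/CCM/MFCC features from `noisy` instead of `vibration` is then the kind of comparison that ranks their noise robustness.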

Speech Recognition for Vowel Detection using Cepstrum Coefficients (켑스트럼 계수에 의한 모음검출을 위한 음성인식)

  • Choi, Jae-Seung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2011.10a
    • /
    • pp.613-615
    • /
    • 2011
  • This paper proposes a speech recognition algorithm based on cepstrum coefficients. In the proposed method, speech uttered by a person is separated into cepstrum coefficients over two regions and then recognized with a neural network. The neural network is trained for a fixed period until the error nearly vanishes; the resulting system can then detect vowels, classifying each speech interval, even for new utterances that differ from the network's training data.

  • PDF

Performance Analysis of Speech Parameters and a New Decision Logic for Speaker Recognition (화자인식을 위한 음성 요소들의 성능분석 및 새로운 판단 논리)

  • Lee, Hyuk-Jae;Lee, Byeong-Gi
    • Journal of the Korean Institute of Telematics and Electronics
    • /
    • v.26 no.7
    • /
    • pp.146-156
    • /
    • 1989
  • This paper discusses how to choose speech parameters and decision logics to improve the performance of speaker recognition systems, and also considers the influence of the reference patterns on speaker recognition. A performance analysis based on LPSs, PARCOR coefficients, and LPC-cepstrum coefficients shows that LPC-cepstrum coefficients are superior to the others for speaker recognition regardless of the reference patterns. To improve recognition performance, a new decision logic is proposed based on a generalized-distance concept. It differs from existing methods in that it considers the statistics of customers and impostors at the same time. A speaker verification test shows that the proposed decision logic performs better than the existing ones.

  • PDF

A Study on the Spoken Korean-Digit Recognition Using the Neural Network (神經網을 利用한 韓國語 數字音 認識에 관한 硏究)

  • Park, Hyun-Hwa;Gahang, Hae Dong;Bae, Keun Sung
    • The Journal of the Acoustical Society of Korea
    • /
    • v.11 no.3
    • /
    • pp.5-13
    • /
    • 1992
  • Taking advantage of the property that Korean digits are mono-syllabic words, we propose a spoken Korean-digit recognition scheme using the multi-layer perceptron. Each spoken Korean digit is divided into three segments (initial sound, medial vowel, and final consonant) based on the voice starting/ending points and a peak point in the middle of the vowel sound. Feature vectors such as cepstrum, reflection coefficients, Δcepstrum, and Δenergy are extracted from each segment. It is shown that cepstrum, as an input vector to the neural network, gives a higher recognition rate than reflection coefficients. Regression coefficients of the cepstrum did not affect the recognition rate as much as we expected; this is believed to be because features were extracted from selected stationary segments of the input speech signal. With 150 cepstral coefficients obtained from each spoken digit, we achieved a correct recognition rate of 97.8%.

  • PDF
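The Δcepstrum and Δenergy features mentioned above are regression coefficients computed over a sliding window of frames. A common formulation, assumed here since the paper does not give its exact window, is:

```python
import numpy as np

def delta_coefficients(features, K=2):
    """Regression (delta) coefficients over a +/-K frame window:
        delta[t] = sum_{k=1..K} k * (c[t+k] - c[t-k]) / (2 * sum_k k^2)
    `features` is a (frames x coefficients) array; edges are handled
    by repeating the first/last frame."""
    features = np.asarray(features, dtype=float)
    padded = np.pad(features, ((K, K), (0, 0)), mode="edge")
    denom = 2.0 * sum(k * k for k in range(1, K + 1))
    deltas = np.zeros_like(features)
    for t in range(features.shape[0]):
        for k in range(1, K + 1):
            deltas[t] += k * (padded[t + K + k] - padded[t + K - k])
    return deltas / denom

# Interior frames of a linear ramp have a delta of exactly 1 per frame.
ramp = np.arange(20.0).reshape(-1, 1)
d = delta_coefficients(ramp, K=2)
```

On the stationary segments the paper selects, frame-to-frame cepstral change is small, which is consistent with the observation above that regression coefficients added little to the recognition rate.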

A Study on Function Recognition of EMG Signal Using LPC Cepstrum Coefficients (LPC 켑스트럼 계수를 이용한 EMG 신호의 기능 인식에 관한 연구)

  • Wang, Sung-Moon;Chung, Tae-Yun;Choi, Yun-Ho;Byun, Youn-Shik;Park, Sang-Hui
    • Journal of the Korean Institute of Telematics and Electronics
    • /
    • v.27 no.2
    • /
    • pp.126-134
    • /
    • 1990
  • In this study, discrimination and recognition of eight functions from the EMG signals of the biceps and triceps of four subjects were carried out using Euclidean and weighted cepstral distance measures with LPC cepstrum coefficients. For the Euclidean cepstral distance measure, as the number of LPC cepstrum coefficients was increased through 8, 10, 12, and 14, the function recognition rates were 94.69, 95.63, 96.56, and 96.88%, respectively, although the rate of improvement tended to decrease. For the weighted cepstral distance measure, with 8, 10, 12, and 14 LPC cepstrum coefficients, the function recognition rates were 91.88, 95, 99.69, and 96.63%, respectively.

  • PDF

A Comparison of Speech/Music Discrimination Features for Audio Indexing (오디오 인덱싱을 위한 음성/음악 분류 특징 비교)

  • 이경록;서봉수;김진영
    • The Journal of the Acoustical Society of Korea
    • /
    • v.20 no.2
    • /
    • pp.10-15
    • /
    • 2001
  • In this paper, we compare combinations of features for speech/music discrimination, i.e., classifying audio signals as speech or music. Audio signals are classified into 3 classes (speech, music, speech and music) and 2 classes (speech, music). Experiments were carried out on three types of features, Mel-cepstrum, energy, and zero-crossings, to find the best feature combination for speech/music discrimination. We use a Gaussian Mixture Model (GMM) as the discrimination algorithm and combine the different features into a single vector before modeling the data with the GMM. For 3 classes, the best result is achieved using Mel-cepstrum, energy, and zero-crossings in a single feature vector (speech: 95.1%, music: 61.9%, speech & music: 55.5%). For 2 classes, the best results are achieved using Mel-cepstrum and energy, and using Mel-cepstrum, energy, and zero-crossings, in a single feature vector (speech: 98.9%, music: 100%).

  • PDF
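Concatenating features into one vector and modeling each class with a GMM, as done above, can be illustrated with a one-component simplification: a single diagonal Gaussian per class, classifying by maximum log-likelihood. Class names and data here are toy placeholders, not from the paper:

```python
import numpy as np

class DiagonalGaussianClassifier:
    """One diagonal Gaussian per class over concatenated feature
    vectors; a single-component simplification of a GMM classifier."""

    def fit(self, features_by_class):
        # Store per-class mean and variance (with a small floor).
        self.params = {
            label: (np.asarray(X, float).mean(axis=0),
                    np.asarray(X, float).var(axis=0) + 1e-6)
            for label, X in features_by_class.items()}
        return self

    def log_likelihood(self, x, label):
        mu, var = self.params[label]
        return float(-0.5 * np.sum(np.log(2.0 * np.pi * var)
                                   + (x - mu) ** 2 / var))

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        return max(self.params, key=lambda lab: self.log_likelihood(x, lab))

# Toy 3-D "Mel-cepstrum + energy + zero-crossings" vectors per class.
rng = np.random.default_rng(0)
model = DiagonalGaussianClassifier().fit({
    "speech": rng.normal(0.0, 1.0, (200, 3)),
    "music": rng.normal(5.0, 1.0, (200, 3)),
})
```

A full GMM replaces each single Gaussian with a weighted sum of components, but the decision rule, pick the class with the highest likelihood of the combined feature vector, is the same.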