• Title/Summary/Keyword: LPC Coefficients


A study on the text-dependent speaker recognition system Using a robust matching process (강인한 정합과정을 이용한 텍스트 종속 화자인식에 관한 연구)

  • Lee, Han-Ku;Lee, Kee-Seong
    • Proceedings of the KIEE Conference
    • /
    • 2002.11c
    • /
    • pp.605-608
    • /
    • 2002
  • A text-dependent speaker recognition system using a robust matching process is studied. Feature histograms of LPC cepstral coefficients are used for matching, and the matching process uses a mixture network with penalty scores. Similarity values are obtained by comparing the probability and shape of two feature histograms. Experimental results demonstrate the effectiveness of the proposed algorithm.

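The abstract describes the histogram comparison only at a high level. As a hedged illustration, the sketch below scores two one-dimensional feature streams with the histogram-intersection similarity; the bin count, value range, and NumPy implementation are assumptions for illustration, not details from the paper.

```python
import numpy as np

def histogram_similarity(feat_a, feat_b, bins=16, value_range=(-1.0, 1.0)):
    """Similarity in [0, 1] between two 1-D cepstral feature streams."""
    ha, _ = np.histogram(feat_a, bins=bins, range=value_range)
    hb, _ = np.histogram(feat_b, bins=bins, range=value_range)
    ha = ha / ha.sum()                       # normalize to probability histograms
    hb = hb / hb.sum()
    return float(np.minimum(ha, hb).sum())   # histogram intersection

gen = np.random.default_rng(0)
# Streams from the same distribution should score higher than shifted ones.
same = histogram_similarity(gen.normal(0, 0.2, 500), gen.normal(0, 0.2, 500))
diff = histogram_similarity(gen.normal(0, 0.2, 500), gen.normal(0.5, 0.2, 500))
```

Histogram intersection is one simple stand-in for the paper's probability-and-shape comparison; any histogram distance with the same normalization would slot in the same way.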

A Study on EMG Functional Recognition Using Reduced-Connection Network (연결 축소 회로망을 이용한 EMG 신호 기능 인식에 관한 연구)

  • 조정호;최윤호
    • Journal of Biomedical Engineering Research
    • /
    • v.11 no.2
    • /
    • pp.249-256
    • /
    • 1990
  • In this study, LPC cepstrum coefficients extracted from an AR model of the EMG signal are used as the feature vector, and a reduced-connection network, which has fewer connections between nodes, is constructed to classify and recognize EMG functional classes. The proposed network reduces learning time and improves system stability; it is therefore shown to be appropriate for recognizing the function of EMG signals.

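The paper's exact connection-reduction scheme is not given in the abstract. One speculative way to realize a "reduced-connection" layer is a dense layer whose weight matrix is multiplied by a fixed 0/1 mask, so only the masked-in node-to-node connections exist or get trained; all sizes and the keep ratio below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, keep = 12, 4, 0.5
mask = (rng.random((n_out, n_in)) < keep).astype(float)  # fixed sparse wiring
W = rng.normal(size=(n_out, n_in)) * mask                # absent links start at 0

def forward(x):
    return np.tanh(W @ x)            # only masked-in connections contribute

def sgd_step(x, grad_out, lr=0.01):
    """One gradient step; the mask keeps absent connections at zero."""
    global W
    W -= lr * np.outer(grad_out, x) * mask
```

Because the mask multiplies both the weights and the update, the pruned connections never reappear, which is the property that cuts learning time.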

IMPLEMENTATION OF REAL TIME RELP VOCODER ON THE TMS320C25 DSP CHIP

  • Kwon, Kee-Hyeon;Chong, Jong-Wha
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1994.06a
    • /
    • pp.957-962
    • /
    • 1994
  • A real-time RELP vocoder is implemented on the TMS320C25 DSP chip. The implemented system is an IBM-PC add-on board composed of an analog I/O unit, a DSP unit, a memory unit, an IBM-PC interface unit, and its supporting assembly software. The speech analyzer and synthesizer are implemented in DSP assembly software. Speech parameters such as LPC coefficients, base-band residuals, and signal gains are extracted by the autocorrelation method and an inverse filter, and synthesized by the spectral-folding method and a direct-form synthesis filter on this board. A real-time 9.6 kbps RELP vocoder is then simulated by down-loading into the DSP program RAM.

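The paper's analyzer runs in DSP assembly, which is not reproduced here; as a reference sketch, the autocorrelation method it names is the Levinson-Durbin recursion below, written in Python with an assumed order (the paper's frame and order settings are not in the abstract).

```python
import numpy as np

def lpc_autocorr(frame, order=10):
    """LPC by the autocorrelation method (Levinson-Durbin recursion).
    Returns the prediction polynomial a = [1, a1..a_order] and the
    final prediction-error energy."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err                      # reflection coefficient
        a[1:i] = a[1:i] + k * a[i - 1:0:-1]
        a[i] = k
        err *= 1.0 - k * k                  # error energy shrinks each order
    return a, err
```

Filtering the frame with the resulting polynomial (the inverse filter) yields the residual that a RELP coder band-limits and transmits.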

A study on Effective Feature Parameters Comparison for Speaker Recognition (화자인식에 효과적인 특징벡터에 관한 비교연구)

  • Park TaeSun;Kim Sang-Jin;Kwang Moon;Hahn Minsoo
    • Proceedings of the KSPS conference
    • /
    • 2003.05a
    • /
    • pp.145-148
    • /
    • 2003
  • In this paper, we carried out a comparative study of various feature parameters for effective speaker recognition, such as LPC, LPCC, MFCC, Log Area Ratios, Reflection Coefficients, Inverse Sine coefficients, and Delta Parameters. We also adopted cepstral liftering and cepstral mean subtraction to check their usefulness. Our recognition system is HMM-based, using a connected four-Korean-digit speech database. The experimental results help to select the most effective parameter for speaker recognition.

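Among the compared parameters, LPCC is derived from LPC by a standard recursion. The sketch below shows that conversion under the common convention that the prediction polynomial is A(z) = 1 + Σ a_k z^{-k}; the paper's own sign convention and orders are assumptions here.

```python
import numpy as np

def lpc_to_cepstrum(a, n_ceps):
    """Cepstral coefficients c[1..n_ceps] of 1/A(z),
    where A(z) = 1 + sum_k a[k] z^-k and a = [1, a1, ..., ap]."""
    p = len(a) - 1
    c = np.zeros(n_ceps + 1)
    for n in range(1, n_ceps + 1):
        acc = a[n] if n <= p else 0.0
        for k in range(1, n):
            if n - k <= p:                  # a[.] is zero beyond order p
                acc += (k / n) * c[k] * a[n - k]
        c[n] = -acc
    return c[1:]
```

For a single pole at 0.5 (a = [1, -0.5]), the recursion reproduces the closed form c_n = 0.5^n / n, which is a convenient sanity check.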

Speaker-Dependent Emotion Recognition For Audio Document Indexing

  • Hung LE Xuan;QUENOT Georges;CASTELLI Eric
    • Proceedings of the IEEK Conference
    • /
    • summer
    • /
    • pp.92-96
    • /
    • 2004
  • Research on emotions is currently of great interest in speech processing as well as in the human-machine interaction domain. In recent years, more and more research on emotion synthesis or emotion recognition has been developed for different purposes, each approach using its own methods and parameters measured on the speech signal. In this paper, we propose using a short-time parameter, MFCC coefficients (Mel-Frequency Cepstrum Coefficients), and a simple but efficient classification method, Vector Quantization (VQ), for speaker-dependent emotion recognition. Many other features, such as energy, pitch, zero-crossing rate, phonetic rate, LPC, and their derivatives, are also tested and combined with MFCC coefficients in order to find the best combination. Other models, GMM and HMM (discrete and continuous Hidden Markov Models), are studied as well, in the hope that continuous distributions and the temporal behaviour of this feature set will improve the quality of emotion recognition. The maximum accuracy in recognizing five different emotions exceeds 88% using only MFCC coefficients with the VQ model. This simple but efficient approach even outperforms human evaluation on the same database, in which listeners judged sentences without the possibility of replaying or comparing them [8], and the result compares favorably with other approaches.

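The VQ classifier idea can be sketched concretely: train one codebook per emotion class and label a test utterance by the codebook with the lowest average distortion. The sketch below stands in random Gaussian vectors for MFCC frames and uses a plain k-means codebook; codebook size, training details, and the two-class setup are illustrative assumptions.

```python
import numpy as np

def train_codebook(features, size=4, iters=20, seed=0):
    """k-means codebook over (frames x dims) feature vectors."""
    rng = np.random.default_rng(seed)
    code = features[rng.choice(len(features), size, replace=False)]
    for _ in range(iters):
        d = ((features[:, None, :] - code[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(size):
            if (labels == j).any():
                code[j] = features[labels == j].mean(0)
    return code

def classify(utterance, codebooks):
    """Pick the emotion whose codebook quantizes the frames best."""
    def distortion(cb):
        d = ((utterance[:, None, :] - cb[None, :, :]) ** 2).sum(-1)
        return d.min(1).mean()
    return min(codebooks, key=lambda name: distortion(codebooks[name]))

gen = np.random.default_rng(1)
books = {
    "joy":     train_codebook(gen.normal(0.0, 0.3, (200, 12))),
    "sadness": train_codebook(gen.normal(1.0, 0.3, (200, 12))),
}
test = gen.normal(1.0, 0.3, (50, 12))     # frames drawn near the "sadness" cluster
label = classify(test, books)
```

Average quantization distortion per codebook is the usual VQ decision rule; GMMs replace it with a likelihood, and HMMs add the temporal modeling the abstract mentions.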

A Method For Improvement Of Split Vector Quantization Of The ISF Parameters Using Adaptive Extended Codebook (적응적인 확장된 코드북을 이용한 분할 벡터 양자화기 구조의 ISF 양자화기 개선)

  • Lim, Jong-Ha;Jeong, Gyu-Hyeok;Hong, Gi-Bong;Lee, In-Sung
    • The Journal of the Acoustical Society of Korea
    • /
    • v.30 no.1
    • /
    • pp.1-8
    • /
    • 2011
  • This paper presents a method for improving the performance of an ISF coefficient quantizer by compensating for the defect of split-structure vector quantization, using the ordering property of ISF coefficients, and designs an ISF quantizer for a wideband speech codec with the proposed method. The wideband speech codec uses a split-structure vector quantizer, which cannot fully exploit the correlation between ISF coefficients, in order to reduce complexity and codebook size. The proposed algorithm uses the ordering property of ISF coefficients to overcome this defect: the ordering property reveals redundancy in the codebook, and the redundant entries are replaced by an adaptive extended codebook, built through ISF coefficient prediction and interpolation of the existing codebook, to improve quantizer performance. As a result, the proposed adaptive-extended-codebook algorithm gains about 2 bits in terms of spectral distortion compared with the existing split-structure ISF quantizer of AMR-WB (G.722.2).
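The redundancy argument can be illustrated in miniature. ISF coefficients are strictly increasing, so in a split VQ any codeword of the second sub-vector whose first element does not exceed the last quantized element of the first sub-vector can never be selected; those slots are the ones the paper repurposes for the adaptive extended codebook. The codebook values below are made up for illustration.

```python
import numpy as np

def redundant_entries(codebook, prev_last):
    """Indices of second-split codewords unusable under the ordering
    property: their first element does not exceed prev_last (Hz)."""
    return np.where(codebook[:, 0] <= prev_last)[0]

# Toy 2-D second-split codebook; suppose the first split ended at 800 Hz.
cb2 = np.array([[300.0, 500.0],
                [900.0, 1200.0],
                [1500.0, 2100.0],
                [700.0, 1100.0]])
dead = redundant_entries(cb2, prev_last=800.0)  # entries violating ordering
```

In the paper these dead slots are filled with predicted/interpolated codewords rather than left wasted, which is where the roughly 2-bit gain comes from.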

A Study on Classification of Four Emotions using EEG (뇌파를 이용한 4가지 감정 분류에 관한 연구)

  • 강동기;김동준;김흥환;고한우
    • Proceedings of the Korean Society for Emotion and Sensibility Conference
    • /
    • 2001.11a
    • /
    • pp.87-90
    • /
    • 2001
  • In this study, emotion classification experiments were conducted with three EEG parameters to find the parameter best suited to an emotion evaluation system. The EEG parameters were linear predictor coefficients and the band-wise cross-correlation coefficients of the FFT spectrum and of the AR spectrum; the target emotions were relaxation, joy, sadness, and irritation. EEG data were collected from four university drama-club students, using electrode positions Fp1, Fp2, F3, F4, T3, T4, P3, P4, O1, and O2. After preprocessing, feature parameters were extracted from the collected EEG data and fed into a neural network used as the pattern classifier. The classification experiments showed that the linear predictor coefficients performed better than the other two parameters.

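The band-wise cross-correlation feature is described only by name; one plausible reading is sketched below, correlating the FFT magnitudes of two channels within standard EEG bands. The band edges, sampling rate, and Pearson-correlation choice are assumptions, not the paper's settings.

```python
import numpy as np

def band_cross_correlation(x, y, fs=128):
    """Per-band correlation of FFT magnitudes between two EEG channels."""
    bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # Hz
    fx, fy = np.fft.rfft(x), np.fft.rfft(y)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    feats = {}
    for name, (lo, hi) in bands.items():
        m = (freqs >= lo) & (freqs < hi)
        a, b = np.abs(fx[m]), np.abs(fy[m])
        feats[name] = float(np.corrcoef(a, b)[0, 1])
    return feats
```

Stacking these per-band values over all channel pairs yields a feature vector of the same kind the study feeds to its neural-network classifier.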

Korean Digit Recognition Using Cepstrum coefficients and Frequency Sensitive Competitive Learning (Cepstrum 계수와 Frequency Sensitive Competitive Learning 신경회로망을 이용한 한국어 인식.)

  • Lee, Su-Hyuk;Cho, Seong-Won;Choi, Gyung-Sam
    • Proceedings of the KIEE Conference
    • /
    • 1994.11a
    • /
    • pp.329-331
    • /
    • 1994
  • In this paper, we present a speaker-dependent Korean isolated-digit recognition system. At the preprocessing step, LPC cepstral coefficients are extracted from the speech signal and used as the input of a Frequency Sensitive Competitive Learning (FSCL) neural network. We carried out postprocessing based on the winning-neuron histogram. Experimental results indicate the feasibility of commercial auto-dial telephones.

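The distinguishing trait of FSCL is that each neuron's distance is scaled by how often it has won, so rarely-winning neurons stay competitive. A minimal sketch follows; unit count, learning rate, and epochs are illustrative assumptions.

```python
import numpy as np

def fscl_train(data, n_units=4, lr=0.1, epochs=20, seed=0):
    """Frequency Sensitive Competitive Learning on (samples x dims) data."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(n_units, data.shape[1]))
    wins = np.ones(n_units)
    for _ in range(epochs):
        for x in data:
            d = ((w - x) ** 2).sum(1) * wins   # frequency-scaled distance
            j = int(d.argmin())
            wins[j] += 1
            w[j] += lr * (x - w[j])            # move winner toward the input
    return w, wins

gen = np.random.default_rng(1)
data = np.vstack([gen.normal(0, 0.2, (100, 2)), gen.normal(3, 0.2, (100, 2))])
w, wins = fscl_train(data)
```

In the paper, the per-frame winning neurons are accumulated into a histogram over the utterance, and that histogram drives the postprocessing decision.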

Isolated Word Recognition using Modified Dynamic Averaging Method (변형된 Dynamic Averaging 방법을 이용한 단독어인식)

  • Jeoung, Eui-Bung;Ko, Young-Hyuk;Lee, Jong-Arc
    • The Journal of the Acoustical Society of Korea
    • /
    • v.10 no.2
    • /
    • pp.23-28
    • /
    • 1991
  • This paper is a study on speaker-independent isolated word recognition; we propose a DTW speech recognition system that uses a modified dynamic averaging method to build the reference patterns. 57 city names are selected as the recognition vocabulary, and 2nd-order LPC cepstrum coefficients are used as the feature parameter. In addition to the recognition experiment using the modified dynamic averaging method for reference patterns, we perform recognition experiments using the causal method, the dynamic averaging method, the linear averaging method, and the clustering method on the same data under the same conditions for comparison. The experimental results prove that the recognition rate of DTW with the modified dynamic averaging method is the best, at 97.6 percent.

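The reference-pattern averaging variants are the paper's contribution, but the DTW matching they all plug into is standard and can be sketched compactly (Euclidean frame distance and unconstrained paths assumed):

```python
import numpy as np

def dtw_distance(a, b):
    """DTW distance between two sequences of feature vectors (frames x dims)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            # allow insertion, deletion, or match steps
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Because the warping absorbs tempo differences, a sequence and its time-stretched copy score a distance of zero, which is the property the averaging methods rely on when aligning training tokens into one reference pattern.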

Speaker Identification Based on Incremental Learning Neural Network

  • Heo, Kwang-Seung;Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.5 no.1
    • /
    • pp.76-82
    • /
    • 2005
  • Speech signals carry various speaker-specific features, which are extracted by speech signal processing; a speaker identification system identifies the speaker from them. In this paper, we propose a speaker identification system that uses incremental learning based on a neural network. Speech recorded through a microphone is blocked into frames of 1024 samples, and an energy measure divides the signal into voiced and unvoiced segments. The extracted 12th-order LPC cepstrum coefficients are used as input data for the neural network. The network is an MLP with 12 input nodes, 8 hidden nodes, and 4 output nodes, where the number of output nodes corresponds to the number of identified speakers; the first output node is excited by the first speaker. Incremental learning begins when a new speaker is enrolled: the already-learned weights are remembered, and only the new weights created by adding the new speaker are trained. This learning algorithm overcomes a limitation of conventional neural networks: the network repeats learning whenever a new speaker is entered, and its architecture is extended with the number of speakers. Therefore, the system can learn without restricting the number of speakers.
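The enrollment mechanism above can be sketched for the output layer alone: freeze the learned output rows and append a fresh row when a new speaker is enrolled, so only the new weights need training. The class name and initialization are hypothetical; the hidden size of 8 follows the abstract.

```python
import numpy as np

class IncrementalHead:
    """Output layer that grows one row per enrolled speaker."""
    def __init__(self, hidden=8):
        self.hidden = hidden
        self.W = np.empty((0, hidden))         # one row per enrolled speaker

    def add_speaker(self, seed=0):
        row = np.random.default_rng(seed).normal(size=(1, self.hidden))
        self.W = np.vstack([self.W, row])      # old rows are kept untouched
        return self.W.shape[0] - 1             # new speaker's output index

    def scores(self, h):
        return self.W @ h                      # one score per speaker

head = IncrementalHead()
head.add_speaker(0)
head.add_speaker(1)
```

Training would then update only the newest row (plus any genuinely new hidden weights), which is what lets the system grow without retraining earlier speakers from scratch.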