• Title/Abstract/Keyword: Speech Parameter

Search results: 373

음성인식에서 입술 파라미터 열화에 따른 견인성 연구 (Robustness of Bimodal Speech Recognition on Degradation of Lip Parameter Estimation Performance)

  • 김진영;민소희;최승호 / 음성과학 / Vol. 10, No. 2 / pp.27-33 / 2003
  • Bimodal speech recognition based on lip reading has been studied as a representative approach to speech recognition in noisy environments. There are three methods for integrating the speech and lip modalities: direct identification, separate identification, and dominant recording. In this paper we evaluate the robustness of these lip reading methods under the assumption that the lip parameters are estimated with errors. We show through lip reading experiments that the dominant recording approach is more robust than the other methods.


한국어 숫자음 전화음성의 채널왜곡에 따른 특징파라미터의 변이 분석 및 인식실험 (Analysis of Feature Parameter Variation for Korean Digit Telephone Speech according to Channel Distortion and Recognition Experiment)

  • 정성윤;손종목;김민성;배건성 / 대한음성학회지:말소리 / No. 43 / pp.179-188 / 2002
  • Improving the recognition performance of connected-digit telephone speech still remains an open problem. As a basic study toward it, this paper analyzes how the feature parameters of Korean digit telephone speech vary with channel distortion. MFCC is used as the feature parameter for both analysis and recognition. To analyze the effect of telephone channel distortion on each call, MFCCs are first obtained from the connected-digit telephone speech for each phoneme in the Korean digits. Then CMN, RTCN, and RASTA are applied to the MFCCs as channel compensation techniques. Using the feature parameters MFCC, MFCC+CMN, MFCC+RTCN, and MFCC+RASTA, the variances of the phonemes are analyzed and recognition experiments are run for each case. The experimental results are discussed together with our findings.

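Of the channel-compensation techniques compared above, CMN is the simplest: a stationary telephone channel adds a roughly constant offset to each cepstral coefficient, so subtracting the per-utterance mean cancels it. A minimal numpy sketch (not the paper's implementation; the array shapes are illustrative):

```python
import numpy as np

def cepstral_mean_normalization(mfcc):
    """Subtract the per-coefficient mean over time.

    mfcc: array of shape (num_frames, num_coeffs).  A stationary
    channel adds a constant offset to each cepstral coefficient,
    so removing the utterance mean cancels it.
    """
    return mfcc - mfcc.mean(axis=0, keepdims=True)

# Toy check: a constant "channel" offset is removed exactly.
frames = np.random.default_rng(0).normal(size=(100, 13))
channel = np.linspace(-2.0, 2.0, 13)       # fixed convolutional distortion
clean = cepstral_mean_normalization(frames)
distorted = cepstral_mean_normalization(frames + channel)
print(np.allclose(clean, distorted))       # → True
```

RTCN and RASTA differ in that they compensate frame-by-frame (a recursive mean estimate and a band-pass filter on the parameter trajectories, respectively) rather than using the whole utterance.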

음성인식에서 입술 파라미터 열화에 따른 견인성 연구 (Robustness of Bimodal Speech Recognition on Degradation of Lip Parameter Estimation Performance)

  • 김진영;신도성;최승호 / 대한음성학회 학술대회논문집 (November 2002) / pp.205-208 / 2002
  • Bimodal speech recognition based on lip reading has been studied as a representative approach to speech recognition in noisy environments. There are three methods for integrating the speech and lip modalities: direct identification, separate identification, and dominant recording. In this paper we evaluate the robustness of these lip reading methods under the assumption that the lip parameters are estimated with errors. We show through lip reading experiments that the dominant recording approach is more robust than the other methods. Also, a measure of lip parameter degradation is proposed; it can be used to determine the weighting of the video information.

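The weighting of video information mentioned above can be illustrated with a generic late-fusion sketch: per-class log-likelihoods from the audio and lip streams are combined with a video weight that a degradation measure could drive toward zero. The weights and scores below are illustrative, not the paper's:

```python
import numpy as np

def fuse_scores(audio_loglik, video_loglik, video_weight):
    """Weighted late fusion of per-class log-likelihoods.

    video_weight in [0, 1]; a lip-parameter degradation measure could
    drive it toward 0 when the visual stream is unreliable.
    Returns the index of the winning class.
    """
    combined = (1.0 - video_weight) * audio_loglik + video_weight * video_loglik
    return int(np.argmax(combined))

audio = np.array([-4.0, -2.0, -9.0])   # audio favours class 1
video = np.array([-1.0, -8.0, -7.0])   # degraded video favours class 0
print(fuse_scores(audio, video, video_weight=0.1))   # low weight trusts audio → 1
```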

Parts-Based Feature Extraction of Spectrum of Speech Signal Using Non-Negative Matrix Factorization

  • Park, Jeong-Won;Kim, Chang-Keun;Lee, Kwang-Seok;Koh, Si-Young;Hur, Kang-In / Journal of information and communication convergence engineering / Vol. 1, No. 4 / pp.209-212 / 2003
  • In this paper, we propose a new speech feature parameter obtained through parts-based feature extraction from the speech spectrum using Non-Negative Matrix Factorization (NMF). NMF can effectively reduce the dimensionality of multi-dimensional data through matrix factorization under non-negativity constraints, and the dimensionally reduced data represent parts-based features of the input. For speech feature extraction, we applied Mel-scaled filter bank outputs as inputs to NMF and then used the NMF outputs as inputs to a speech recognizer. The recognition results confirm that the proposed feature parameter outperforms the commonly used mel-frequency cepstral coefficients (MFCC).
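The NMF step described above can be sketched with the standard Lee-Seung multiplicative updates; the filter-bank dimensions, rank, and iteration count below are illustrative, not the paper's settings:

```python
import numpy as np

def nmf(V, rank, n_iter=500, seed=0):
    """Lee-Seung multiplicative updates minimising ||V - W @ H||_F.

    V: non-negative (num_mel_bands, num_frames) filter-bank matrix.
    W: spectral "parts" (basis); H: per-frame activations, which could
    serve as the recognizer's input features.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + 1e-6
    H = rng.random((rank, n)) + 1e-6
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

# Toy "mel filter-bank output": 20 bands x 50 frames built from 3 parts.
rng = np.random.default_rng(1)
V = rng.random((20, 3)) @ rng.random((3, 50))
W, H = nmf(V, rank=3)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))   # small relative residual
```

Note that both factors stay non-negative by construction, which is what makes the learned basis vectors interpretable as additive "parts" of the spectrum.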

개별 음향 정보를 이용한 화자 확인 알고리즘 성능향상 연구 (The Study for Advancing the Performance of Speaker Verification Algorithm Using Individual Voice Information)

  • 이재형;강선미 / 음성과학 / Vol. 9, No. 4 / pp.253-263 / 2002
  • In this paper, we propose a new speaker recognition algorithm that identifies the speaker using information obtained from an intensive analysis of speech features such as pitch, intensity, duration, and formants, which are crucial parameters of an individual voice, for candidates with a high rate of misrecognition in the existing speaker recognition algorithm. DTW (Dynamic Time Warping) is used to test the discriminative power of each parameter. We newly set the threshold range that affects the discriminative power in speaker verification, so that candidates within the new threshold range are finally discriminated in a subsequent stage of sound parameter analysis. In a speaker verification test on a voice DB consisting of secret words from 25 males and 25 females (8 kHz, 16 bit), the proposed algorithm shows about a 1% performance improvement over the existing algorithm.

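The DTW matching used above to test each parameter's discriminative power follows the classic dynamic-programming recursion; a minimal sketch for 1-D parameter tracks such as pitch or intensity contours (the inputs are illustrative):

```python
import numpy as np

def dtw_distance(a, b):
    """Classic DTW between two 1-D parameter tracks.

    D[i, j] = local cost + min over the three predecessor cells,
    so timing differences are absorbed by the warping path.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

ref = [1.0, 2.0, 3.0, 2.0, 1.0]
same = [1.0, 2.0, 2.0, 3.0, 2.0, 1.0]   # same contour, time-stretched
other = [3.0, 3.0, 3.0, 3.0]
print(dtw_distance(ref, same))    # → 0.0: warping absorbs the stretch
print(dtw_distance(ref, other))   # larger: genuinely different contour
```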

청각장애자용 발음훈련기기 개발에 관한 연구 (A Study on Speech Training Aids for the Deaf)

  • 안상필;이재혁;윤태성;박상희 / 대한전기학회 학술대회논문집 (1990 Summer Conference) / pp.47-50 / 1990
  • The deaf cannot speak as clearly as hearing people because they lack auditory feedback of their own pronunciation, so speech training is required. In this study, the fundamental frequency, intensity, formant frequencies, vocal tract graphic, and vocal tract area function extracted from the speech signal are used as feature parameters. An AR model, whose coefficients are extracted using inverse filtering, is used as the speech generation model. To connect the vocal tract graphic with the speech parameters, articulation distances and articulation distance functions over 15 selected intervals are determined from the extracted vocal tract areas and formant frequencies.

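The AR-model analysis described above can be sketched as autocorrelation-method LPC followed by formant estimation from the pole angles; the analysis order and synthetic signal below are illustrative, not the paper's settings (a Toeplitz solve stands in for Levinson recursion):

```python
import numpy as np

def lpc(signal, order):
    """Autocorrelation-method LPC: A(z) = 1 - sum_k a_k z^-k."""
    r = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])
    return np.concatenate(([1.0], -a))

def formant_freqs(a, fs):
    """Formant estimates: angles of strong AR poles in the upper half-plane."""
    roots = [z for z in np.roots(a) if z.imag > 1e-3 and abs(z) > 0.8]
    return sorted(np.angle(z) * fs / (2 * np.pi) for z in roots)

fs = 8000.0
rng = np.random.default_rng(0)
t = np.arange(0, 0.05, 1 / fs)
# Synthetic vowel-like signal with spectral peaks near 700 Hz and 1200 Hz.
x = (np.sin(2 * np.pi * 700 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)
     + 0.05 * rng.normal(size=t.size))
fmts = formant_freqs(lpc(x, order=8), fs)
print([round(f) for f in fmts])   # includes values near 700 and 1200
```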

Spectrum 강조특성을 이용한 음성신호에서 Voiced - Unvoiced - Silence 분류 (Voiced, Unvoiced, and Silence Classification of Human Speech Signals by Emphasis Characteristics of the Spectrum)

  • 배명수;안수길 / 한국음향학회지 / Vol. 4, No. 1 / pp.9-15 / 1985
  • In this paper, we describe a new algorithm for deciding whether a given segment of a speech signal should be classified as voiced speech, unvoiced speech, or silence, based on parameters measured on the signal. The parameter measured for the voiced-unvoiced classification is the area of each zero-crossing interval, given by multiplying the magnitude by the inverse zero-crossing rate of the speech signal. The parameter employed for the unvoiced-silence classification is the positive-area summation over each four-millisecond interval of the high-frequency-emphasized speech signal.

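The zero-crossing-interval area parameter can be computed directly: sum the sample magnitudes between consecutive sign changes. Voiced speech (slow, large swings) yields large areas; unvoiced speech (fast, small swings) yields small ones. A minimal sketch with illustrative toy signals:

```python
import numpy as np

def zero_crossing_areas(frame):
    """Area of each zero-crossing interval: sum of |sample| between
    consecutive sign changes of the waveform."""
    crossings = np.flatnonzero(np.diff(np.signbit(frame))) + 1
    return [np.abs(seg).sum() for seg in np.split(frame, crossings)]

fs = 8000
t = np.arange(256) / fs
voiced = np.sin(2 * np.pi * 120 * t)       # slow oscillation, large swings
rng = np.random.default_rng(0)
unvoiced = 0.1 * rng.normal(size=256)      # rapid sign changes, small swings
v_mean = np.mean(zero_crossing_areas(voiced))
u_mean = np.mean(zero_crossing_areas(unvoiced))
print(v_mean > u_mean)   # → True
```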

특징 선택과 융합 방법을 이용한 음성 감정 인식 (Speech Emotion Recognition using Feature Selection and Fusion Method)

  • 김원구 / 전기학회논문지 / Vol. 66, No. 8 / pp.1265-1271 / 2017
  • In this paper, a speech parameter fusion method is studied to improve the performance of a conventional emotion recognition system. To this end, the best-performing combination of the cepstrum parameters and the various pitch parameters used in the conventional system is selected. The pitch parameters were generated from the pitch of speech using numerical and statistical methods. Performance was evaluated on an emotion recognition system based on a Gaussian mixture model (GMM) to find the pitch parameters that perform best in combination with the cepstrum parameters, using sequential feature selection as the selection method. In an experiment distinguishing the four emotions normal, joy, sadness, and anger, fifteen of the 56 pitch parameters were selected and showed the best recognition performance when fused with the cepstrum and delta-cepstrum coefficients. This is a 48.9% reduction in error compared with an emotion recognition system using only pitch parameters.
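The sequential feature selection step can be sketched generically: greedily add the feature whose inclusion most improves a score. The paper scores candidate sets with a GMM recognizer; a simple class-separability criterion stands in here, and all data below are illustrative:

```python
import numpy as np

def sequential_forward_selection(X, y, n_select, score_fn):
    """Greedy SFS: repeatedly add the feature whose inclusion most
    improves score_fn on the candidate feature set."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < n_select:
        best = max(remaining, key=lambda j: score_fn(X[:, selected + [j]], y))
        selected.append(best)
        remaining.remove(best)
    return selected

def fisher_score(Xs, y):
    # Stand-in criterion: spread of the class means over the
    # within-class variance (the paper uses GMM recognition accuracy).
    classes = np.unique(y)
    means = np.array([Xs[y == c].mean(axis=0) for c in classes])
    within = np.mean([Xs[y == c].var(axis=0).mean() for c in classes])
    return means.var(axis=0).mean() / (within + 1e-12)

rng = np.random.default_rng(0)
y = np.repeat([0, 1], 50)
X = rng.normal(size=(100, 6))
X[:, 2] += 3.0 * y        # feature 2 carries strong class information
X[:, 4] += 1.5 * y        # feature 4 carries weaker information
sel = sequential_forward_selection(X, y, 2, fisher_score)
print(sel)                # picks the informative features, strongest first
```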

Line Spectral Frequency와 음성신호의 주파수 분포에 관한 연구 (A Study on the Relation Between the LSF's and Spectral Distribution of Speech Signals)

  • 이동수;김영화 / 대한전자공학회논문지 / Vol. 25, No. 4 / pp.430-436 / 1988
  • LSF (Line Spectral Frequency) parameters derived from LPC are known to be very useful for transmitting speech signals, as they have good linear interpolation characteristics and low spectral distortion in low-bit-rate coding. This paper shows that the formant frequencies of a speech signal can be extracted directly from the LSF parameters, without applying an FFT algorithm, by comparing the distribution of the LSF parameters with the frequency distribution of the analysis filter. It also suggests an improved algorithm that speeds up the convergence of the analytic solution method. Finally, for flexibility of the parameters, a procedure for transforming LSF back to LPC is presented.

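The relation between LSF locations and spectral peaks can be illustrated by constructing the symmetric and antisymmetric polynomials P(z) = A(z) + z^-(p+1)A(1/z) and Q(z) = A(z) - z^-(p+1)A(1/z) from the LPC polynomial and reading the unit-circle root angles; for a sharp resonance, the nearest pair of LSFs brackets it. The AR(2) example below is illustrative, not from the paper:

```python
import numpy as np

def lpc_to_lsf(a):
    """Line spectral frequencies (rad) from LPC coefficients a = [1, a1..ap].

    The roots of P and Q lie on the unit circle and interlace; the
    root angles in (0, pi) are the LSFs.
    """
    a = np.asarray(a, dtype=float)
    ext = np.append(a, 0.0)
    P, Q = ext + ext[::-1], ext - ext[::-1]
    angles = []
    for poly in (P, Q):
        for z in np.roots(poly):
            w = np.angle(z)
            if 1e-6 < w < np.pi - 1e-6:   # drop trivial roots at z = +/-1
                angles.append(w)
    return np.sort(angles)

# AR(2) with one resonance at w0: A(z) = 1 - 2r cos(w0) z^-1 + r^2 z^-2.
r, w0 = 0.95, 2 * np.pi * 0.2
a = [1.0, -2 * r * np.cos(w0), r * r]
lsf = lpc_to_lsf(a)
print(lsf)   # two LSFs tightly bracketing the resonance angle w0
```

The tighter the LSF pair, the sharper the resonance, which is why formants can be read off the LSF distribution directly.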

Robust Entropy Based Voice Activity Detection Using Parameter Reconstruction in Noisy Environment

  • Han, Hag-Yong;Lee, Kwang-Seok;Koh, Si-Young;Hur, Kang-In / Journal of information and communication convergence engineering / Vol. 1, No. 4 / pp.205-208 / 2003
  • Voice activity detection is an important problem in speech recognition and speech communication. This paper introduces a new feature parameter, reconstructed from the spectral entropy of information theory, for robust voice activity detection in noisy environments, and then analyzes it and compares its performance with the energy-based method of voice activity detection. In experiments, we confirmed that spectral entropy and its reconstructed parameter are superior to the energy method for robust voice activity detection in various noise environments.
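The spectral-entropy parameter rests on the observation that speech spectra are peaky (low entropy) while wide-band noise is flat (high entropy). A minimal sketch of the base measure; frame length and signals are illustrative, and the paper's reconstruction step is not reproduced here:

```python
import numpy as np

def spectral_entropy(frame):
    """Entropy (bits) of the normalized magnitude spectrum of one frame."""
    mag = np.abs(np.fft.rfft(frame))
    p = mag / (mag.sum() + 1e-12)          # treat spectrum as a distribution
    return -np.sum(p * np.log2(p + 1e-12))

fs = 8000
t = np.arange(256) / fs
rng = np.random.default_rng(0)
speech_like = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 400 * t)
noise = rng.normal(size=256)
print(spectral_entropy(speech_like) < spectral_entropy(noise))   # → True
```

Thresholding this entropy per frame gives a simple voice-activity decision that, unlike frame energy, is insensitive to the absolute noise level.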