• Title/Abstract/Keyword: Speech Feature Analysis

Search results: 177 items

음악과 음성 판별을 위한 웨이브렛 영역에서의 특징 파라미터 (Feature Parameter Extraction and Analysis in the Wavelet Domain for Discrimination of Music and Speech)

  • 김정민;배건성
    • 대한음성학회지:말소리 / No. 61 / pp. 63-74 / 2007
  • Discrimination of music and speech from multimedia signals is an important task in audio coding and broadcast monitoring systems. This paper deals with the problem of feature parameter extraction for discrimination of music and speech. The wavelet transform is a multi-resolution analysis method that is useful for analyzing the temporal and spectral properties of non-stationary signals such as speech and audio signals. We propose new feature parameters extracted from the wavelet-transformed signal for discrimination of music and speech. First, wavelet coefficients are obtained on a frame-by-frame basis, with the analysis frame size set to 20 ms. A parameter $E_{sum}$ is then defined by adding up the magnitude differences between adjacent wavelet coefficients in each scale. The maximum and minimum values of $E_{sum}$ over a period of 2 seconds, which corresponds to the discrimination duration, are used as feature parameters for discriminating music from speech. To evaluate the performance of the proposed feature parameters, discrimination accuracy is measured for various types of music and speech signals. In the experiments, each 2-second segment is classified as music or speech, and about 93% of the music and speech segments were correctly identified.
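
The $E_{sum}$ computation described in this abstract can be sketched as follows. This is a minimal pure-Python illustration assuming a Haar wavelet, 3 decomposition levels, and an 8 kHz sampling rate (so a 20 ms frame is 160 samples); apart from the 20 ms frame size, these specifics are assumptions, not values stated in the abstract.

```python
import math

def haar_dwt(x):
    """One level of the Haar wavelet transform: (approximation, detail)."""
    approx = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    detail = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return approx, detail

def e_sum(frame, levels=3):
    """Sum over scales of |difference of magnitude between adjacent coefficients|."""
    total = 0.0
    signal = list(frame)
    for _ in range(levels):
        signal, detail = haar_dwt(signal)
        total += sum(abs(abs(detail[i+1]) - abs(detail[i]))
                     for i in range(len(detail) - 1))
    return total

def min_max_esum(samples, frame_len=160, n_frames=100):
    """Min/max of E_sum over a 2 s block (100 frames of 20 ms at 8 kHz)."""
    vals = [e_sum(samples[i*frame_len:(i+1)*frame_len])
            for i in range(n_frames)
            if (i + 1) * frame_len <= len(samples)]
    return min(vals), max(vals)
```

The resulting (min, max) pair per 2-second block would then feed the music/speech decision rule.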


목소리 특성과 음성 특징 파라미터의 상관관계와 SVM을 이용한 특성 분류 모델링 (Correlation analysis of voice characteristics and speech feature parameters, and classification modeling using SVM algorithm)

  • 박태성;권철홍
    • 말소리와 음성과학 / Vol. 9, No. 4 / pp. 91-97 / 2017
  • This study categorizes several voice characteristics by subjective listening assessment and investigates the correlation between voice characteristics and speech feature parameters. A model was developed to classify voice characteristics into the defined categories using an SVM algorithm. To this end, we extracted various speech feature parameters from a speech database of men in their 20s and, through ANOVA, derived statistically significant parameters correlated with the voice characteristics. These derived parameters were then applied to the proposed SVM model. The experimental results showed that speech feature parameters significantly correlated with the voice characteristics can be obtained, and that the proposed model achieves an average classification accuracy of 88.5%.
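
The parameter-selection step in this abstract (deriving significant parameters through ANOVA) amounts to computing a one-way ANOVA F statistic per feature across the listener-defined categories. A minimal stdlib sketch, with synthetic groups standing in for the paper's actual data:

```python
def anova_f(groups):
    """One-way ANOVA F statistic: ratio of between-group to
    within-group mean squares over a list of sample groups."""
    k = len(groups)                                  # number of categories
    n = sum(len(g) for g in groups)                  # total samples
    grand = sum(sum(g) for g in groups) / n          # grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within
```

A feature whose F statistic exceeds the critical value for the chosen significance level would be kept as an SVM input; the SVM training itself (e.g. with scikit-learn's `SVC`) is omitted here.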

Speech Feature Extraction Based on the Human Hearing Model

  • Chung, Kwang-Woo;Kim, Paul;Hong, Kwang-Seok
    • 대한음성학회:학술대회논문집 / 대한음성학회 1996년도 10월 학술대회지 / pp. 435-447 / 1996
  • In this paper, we propose a method that extracts speech features using a hearing model through signal processing techniques. The proposed method includes the following procedure: normalization of the short-time speech block by its maximum value, multi-resolution analysis using the discrete wavelet transform and re-synthesis using the inverse discrete wavelet transform, differentiation after analysis and synthesis, and full-wave rectification and integration. To verify the performance of the proposed speech feature in speech recognition, Korean digit recognition experiments were carried out using both DTW and VQ-HMM. The results showed that, with DTW, the recognition rates were 99.79% and 90.33% for the speaker-dependent and speaker-independent tasks respectively and, with VQ-HMM, the rates were 96.5% and 81.5% respectively. This indicates that the proposed speech feature has potential for use as a simple and efficient feature for recognition tasks.
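
The later stages of the pipeline named in this abstract (normalization, differentiation, full-wave rectification, integration) can be sketched in a few lines; the wavelet analysis/re-synthesis stage is omitted here for brevity, and the integration-window length is an illustrative assumption:

```python
def hearing_model_feature(block, win=8):
    """Sketch of the pipeline's post-wavelet stages: normalize by the
    block maximum, differentiate, full-wave rectify, then integrate
    with a non-overlapping moving sum."""
    peak = max(abs(s) for s in block) or 1.0
    norm = [s / peak for s in block]                          # normalization
    diff = [norm[i+1] - norm[i] for i in range(len(norm)-1)]  # differentiation
    rect = [abs(d) for d in diff]                             # full-wave rectification
    # integration: moving sum over non-overlapping windows
    return [sum(rect[i:i+win]) for i in range(0, len(rect) - win + 1, win)]
```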


음성 신호 특징과 셉스트럽 특징 분포에서 묵음 특징 정규화를 융합한 음성 인식 성능 향상 (Voice Recognition Performance Improvement using the Convergence of Voice signal Feature and Silence Feature Normalization in Cepstrum Feature Distribution)

  • 황재천
    • 한국융합학회논문지 / Vol. 8, No. 5 / pp. 13-17 / 2017
  • In speech recognition, conventional feature-extraction methods yield inaccurate recognition rates because of ill-defined threshold values. This study models a method for improving speech recognition performance by fusing speech/non-speech feature extraction with silence feature normalization. The proposed method builds the model so as to minimize the influence of noise: speech-signal features are extracted for each speech frame to construct the recognition model, and silence feature normalization is fused in so that the energy spectrum is represented similarly to entropy, reconstructing the original speech signal and making the speech features less sensitive to noise. A criterion value for classifying speech and non-speech in the cepstrum is set, and silence feature normalization improves performance on signals with a low signal-to-noise ratio. The performance of the proposed method was analyzed by comparing HMM and CHMM; relative to the conventional HMM and CHMM, the recognition rate improved by 2.1%p in the speaker-dependent case and by 0.7%p in the speaker-independent case.
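
Silence feature normalization, as the abstract uses the term, can be illustrated generically: frames judged as non-speech are replaced by a running silence estimate so that noise-only regions contribute a stable value rather than raw noise. The energy threshold, feature layout, and decision rule below are assumptions for illustration, not details from the paper:

```python
import math

def silence_feature_normalization(frames, threshold):
    """Generic sketch of silence feature normalization (SFN): frames whose
    log energy falls below the threshold are treated as silence and their
    feature values are replaced by the running silence mean."""
    out = []
    silence_mean, n_sil = 0.0, 0
    for frame in frames:
        energy = math.log(sum(x * x for x in frame) + 1e-10)
        if energy < threshold:                        # non-speech frame
            n_sil += 1
            mean = sum(frame) / len(frame)
            silence_mean += (mean - silence_mean) / n_sil
            out.append([silence_mean] * len(frame))   # normalized silence
        else:                                         # speech frame: keep
            out.append(list(frame))
    return out
```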

청각 모델에 기초한 음성 특징 추출에 관한 연구 (A study on the speech feature extraction based on the hearing model)

  • 김바울;윤석현;홍광석;박병철
    • 전자공학회논문지B / Vol. 33B, No. 4 / pp. 131-140 / 1996
  • In this paper, we propose a method that extracts speech features using a hearing model through signal processing techniques. The proposed method includes the following procedure: normalization of the short-time speech block by its maximum value, multi-resolution analysis using the discrete wavelet transform and re-synthesis using the inverse discrete wavelet transform, differentiation after analysis and synthesis, and full-wave rectification and integration. To verify the performance of the proposed speech feature in speech recognition, Korean digit recognition experiments were carried out using both DTW and VQ-HMM. The results showed that, with DTW, the recognition rates were 99.79% and 90.33% for the speaker-dependent and speaker-independent tasks respectively and, with VQ-HMM, the rates were 96.5% and 81.5% respectively. This indicates that the proposed speech feature has potential for use as a simple and efficient feature for recognition tasks.


Wavelet 특징 파라미터를 이용한 한국어 고립 단어 음성 검출 및 인식에 관한 연구 (A Study on Korean Isolated Word Speech Detection and Recognition using Wavelet Feature Parameter)

  • 이준환;이상범
    • 한국정보처리학회논문지 / Vol. 7, No. 7 / pp. 2238-2245 / 2000
  • In this paper, feature parameters extracted with the wavelet transform from Korean isolated-word speech are used for speech detection and recognition. The speech-detection results show that the proposed parameters locate speech boundaries more accurately than the conventional method based on energy and zero-crossing rate. In addition, when the wavelet-based parameters are applied to recognition in place of MFCC features, the results are equal to those obtained with FFT-based MFCC features. These results verify the usefulness of wavelet-transform feature parameters for speech analysis and recognition.
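
The conventional baseline this abstract compares against, endpoint detection from frame energy and zero-crossing rate, can be sketched as follows; the frame length and both thresholds are illustrative assumptions:

```python
def detect_endpoints(samples, frame_len=160, e_thresh=0.1, z_thresh=0.25):
    """Classic energy/zero-crossing-rate endpoint detector: a frame is
    speech if its mean-square energy or its zero-crossing rate exceeds
    a threshold; returns (first, last) speech frame indices or None."""
    flags = []
    for i in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[i:i + frame_len]
        energy = sum(x * x for x in frame) / frame_len
        zcr = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / frame_len
        flags.append(energy > e_thresh or zcr > z_thresh)
    speech = [i for i, f in enumerate(flags) if f]
    return (speech[0], speech[-1]) if speech else None
```

The paper's point is that wavelet-domain parameters give tighter boundaries than this energy/ZCR rule on noisy speech.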


다중 센서 융합 알고리즘을 이용한 감정인식 및 표현기법 (Emotion Recognition and Expression Method using Bi-Modal Sensor Fusion Algorithm)

  • 주종태;장인훈;양현창;심귀보
    • 제어로봇시스템학회논문지 / Vol. 13, No. 8 / pp. 754-759 / 2007
  • In this paper, we propose a bi-modal sensor fusion algorithm, an emotion recognition method that classifies four emotions (happy, sad, angry, surprise) by using facial images and speech signals together. We extract feature vectors from the speech signal using acoustic features without linguistic features and classify the emotional pattern with a neural network. From the facial image, we select features of the mouth, eyes, and eyebrows, and apply Principal Component Analysis (PCA) to the extracted feature vectors to produce low-dimensional feature vectors. Finally, we propose a method that fuses the emotion-recognition results obtained from the facial image and the speech signal.
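
The final fusion step can be illustrated with a simple decision-level scheme: a weighted sum of the per-class scores from the two modalities. The weight, the class ordering, and the weighted-sum rule itself are assumptions for illustration; the abstract does not specify how the two results are combined:

```python
def fuse_emotions(face_probs, speech_probs, w_face=0.6):
    """Decision-level fusion sketch: weighted sum of per-class scores
    from the face and speech classifiers, then argmax."""
    emotions = ("happy", "sad", "angry", "surprise")
    fused = [w_face * f + (1.0 - w_face) * s
             for f, s in zip(face_probs, speech_probs)]
    return emotions[fused.index(max(fused))]
```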

한국어 숫자음 전화음성의 채널왜곡에 따른 특징파라미터의 변이 분석 및 인식실험 (Analysis of Feature Parameter Variation for Korean Digit Telephone Speech according to Channel Distortion and Recognition Experiment)

  • 정성윤;손종목;김민성;배건성
    • 대한음성학회지:말소리 / No. 43 / pp. 179-188 / 2002
  • Improving the recognition performance of connected-digit telephone speech remains a problem to be solved. As a basic study toward it, this paper analyzes the variation of feature parameters of Korean digit telephone speech according to channel distortion. MFCC is used as the feature parameter for analysis and recognition. To analyze the effect of telephone channel distortion on each call, MFCCs are first obtained from the connected-digit telephone speech for each phoneme included in the Korean digits. Then CMN, RTCN, and RASTA are applied to the MFCCs as channel compensation techniques. Using the feature parameters MFCC, MFCC+CMN, MFCC+RTCN, and MFCC+RASTA, the variances of the phonemes are analyzed and recognition experiments are carried out for each case. The experimental results are discussed along with our findings.
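
Of the channel-compensation techniques named here, CMN (cepstral mean normalization) is the simplest: a stationary telephone channel adds a constant offset in the cepstral domain, so subtracting the per-utterance mean of each coefficient removes it. A minimal sketch:

```python
def cepstral_mean_normalization(mfcc_frames):
    """CMN: subtract the per-utterance mean of each cepstral coefficient,
    removing a stationary (convolutional) channel bias."""
    n = len(mfcc_frames)
    dims = len(mfcc_frames[0])
    means = [sum(f[d] for f in mfcc_frames) / n for d in range(dims)]
    return [[f[d] - means[d] for d in range(dims)] for f in mfcc_frames]
```

RTCN and RASTA follow the same idea but estimate and remove the channel component recursively or by band-pass filtering the cepstral trajectories.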


독립성분분석을 이용한 DSP 기반의 화자 독립 음성 인식 시스템의 구현 (Implementation of Speaker Independent Speech Recognition System Using Independent Component Analysis based on DSP)

  • 김창근;박진영;박정원;이광석;허강인
    • 한국정보통신학회논문지 / Vol. 8, No. 2 / pp. 359-364 / 2004
  • In this paper, we implement a real-time speaker-independent speech recognition system, robust in noisy environments, on a general-purpose digital signal processor. The system uses TI's TMS320C32, a general-purpose floating-point DSP, together with a speech CODEC for real-time speech input and an extended external interface for outputting recognition results. Instead of the commonly used MFCC (Mel Frequency Cepstral Coefficient) features, the real-time recognizer uses parameters obtained by transforming the MFCC feature space through independent component analysis, so that the features are robust to external noise. Recognition experiments in noisy environments with the two feature parameters confirmed that the ICA-derived features outperform MFCC.

주파수 영역에서의 고립단어에 대한 음성 특징 추출 (Speech Feature Extraction for Isolated Word in Frequency Domain)

  • 조영훈;박은명;강홍석;박원배
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2000년도 하계종합학술대회 논문집(4) / pp. 81-84 / 2000
  • In this paper, a new technique for extracting features of the speech signal of an isolated word by analysis in the frequency domain is proposed. The technique can be applied efficiently to a limited speech domain. To extract the features of the speech signal, the number of spectral peaks is calculated and the frequency value of each peak is used. The difference between the maximum peak and the second-largest peak is also considered to distinguish among the words in the limited domain. By implementing this process hierarchically, the speech features can be extracted more quickly.
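
The three peak features named in this abstract (peak count, frequency of the largest peak, and the gap between the two largest peaks) can be sketched as follows. A direct O(N²) DFT keeps the sketch dependency-free, and bin indices stand in for frequencies; the local-maximum definition of a "peak" is an assumption, since the abstract does not define one:

```python
import math

def peak_features(samples):
    """Return (number of peaks, bin of largest peak, magnitude gap between
    the largest and second-largest peaks) of the magnitude spectrum."""
    n = len(samples)
    mags = []
    for k in range(n // 2):                      # direct DFT magnitude
        re = sum(samples[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(samples[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    # peaks = strict local maxima of the magnitude spectrum
    peaks = [(mags[k], k) for k in range(1, len(mags) - 1)
             if mags[k] > mags[k-1] and mags[k] > mags[k+1]]
    if not peaks:
        return 0, None, 0.0
    peaks.sort(reverse=True)
    gap = peaks[0][0] - peaks[1][0] if len(peaks) > 1 else peaks[0][0]
    return len(peaks), peaks[0][1], gap
```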
