• Title/Abstract/Keyword: Speech feature


음성 신호 특징과 셉스트럽 특징 분포에서 묵음 특징 정규화를 융합한 음성 인식 성능 향상 (Voice Recognition Performance Improvement using the Convergence of Voice signal Feature and Silence Feature Normalization in Cepstrum Feature Distribution)

  • 황재천
    • 한국융합학회논문지 / Vol. 8, No. 5 / pp.13-17 / 2017
  • In speech recognition, conventional speech feature extraction methods suffer from inaccurate recognition rates due to unclear threshold values. This study models a method for improving speech recognition performance by fusing silence feature normalization with feature extraction for speech and non-speech. The proposed method builds the model so as to minimize the influence of noise: speech signal features are extracted for each speech frame to construct the recognition model, and silence feature normalization is fused in so that the energy spectrum is represented similarly to entropy, reconstructing the original speech signal and making the speech features less susceptible to noise. By setting a reference value for classifying speech and non-speech in the cepstrum, performance was improved through silence feature normalization on signals with a low signal-to-noise ratio. The performance of the proposed method was analyzed against HMM and CHMM; compared with the conventional HMM and CHMM, the recognition rate improved by 2.1%p in the speaker-dependent task and by 0.7%p in the speaker-independent task.
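The entropy-like representation of the energy spectrum described above can be illustrated with a minimal speech/silence sketch. The threshold value and the toy spectra below are illustrative assumptions, not parameters from the paper:

```python
import math

def spectral_entropy(power_spectrum):
    """Normalize the power spectrum to a probability mass and compute
    its Shannon entropy: high for noise-like flat spectra, low for
    peaky voiced-speech spectra."""
    total = sum(power_spectrum)
    probs = [p / total for p in power_spectrum]
    return -sum(p * math.log2(p) for p in probs if p > 0)

def classify_frames(frames, threshold):
    """Label each frame's power spectrum as speech or silence by
    thresholding its spectral entropy."""
    return ["silence" if spectral_entropy(f) > threshold else "speech"
            for f in frames]

# A peaky (speech-like) and a flat (silence/noise-like) toy spectrum.
speech_like = [100.0, 1.0, 1.0, 1.0]
noise_like = [1.0, 1.0, 1.0, 1.0]
labels = classify_frames([speech_like, noise_like], threshold=1.0)
```

In the paper the reference value is set in the cepstral domain; the entropy threshold here merely shows the shape of such a decision rule.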

잡음 환경에서의 음성 감정 인식을 위한 특징 벡터 처리 (Feature Vector Processing for Speech Emotion Recognition in Noisy Environments)

  • 박정식;오영환
    • 말소리와 음성과학 / Vol. 2, No. 1 / pp.77-85 / 2010
  • This paper proposes an efficient feature vector processing technique to guard the Speech Emotion Recognition (SER) system against a variety of noises. In the proposed approach, emotional feature vectors are extracted from speech processed by comb filtering, and these vectors are used to construct robust models via feature vector classification. We modify conventional comb filtering by using the speech presence probability to minimize the drawbacks of incorrect pitch estimation under background noise. The modified comb filtering correctly enhances the harmonics, which are an important cue in SER. The feature vector classification technique categorizes feature vectors as either discriminative or non-discriminative based on a log-likelihood criterion, successfully selecting the discriminative vectors while preserving the correct emotional characteristics. Robust emotion models can thus be constructed using only the discriminative vectors. In SER experiments on an emotional speech corpus contaminated by various noises, our approach outperformed the baseline system.
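A basic feed-forward comb filter gated by a speech presence probability can be sketched as follows. The gain value and the way the probability scales the delayed branch are simplifying assumptions; the paper's actual modification operates inside the pitch-estimation-driven filtering stage:

```python
def comb_filter(x, pitch_period, presence_prob, alpha=0.5):
    """Reinforce harmonics by adding a delayed copy of the signal,
    scaled by the estimated speech presence probability so that frames
    unlikely to contain speech are left almost untouched.
    `pitch_period` is the pitch lag in samples; `alpha` is an
    illustrative filter gain."""
    y = list(x)
    for n in range(pitch_period, len(x)):
        y[n] = x[n] + alpha * presence_prob * x[n - pitch_period]
    return y

# Periodic toy signal with period 4: the delayed copy adds in phase,
# so harmonics of the pitch are boosted.
x = [1.0, 0.0, -1.0, 0.0] * 4
enhanced = comb_filter(x, pitch_period=4, presence_prob=1.0)
suppressed = comb_filter(x, pitch_period=4, presence_prob=0.0)
```

With `presence_prob=0.0` the filter passes the input through unchanged, which is the intent of gating by speech presence.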


Noisy Speech Recognition Based on Noise-Adapted HMMs Using Speech Feature Compensation

  • Chung, Yong-Joo
    • 융합신호처리학회논문지 / Vol. 15, No. 2 / pp.37-41 / 2014
  • The vector Taylor series (VTS) based method usually employs clean-speech hidden Markov models (HMMs) when compensating speech feature vectors or adapting the parameters of trained HMMs. It is well known that noisy-speech HMMs trained by multi-condition training (MTR) and by the multi-model-based speech recognition framework (MMSR) outperform clean-speech HMMs in noisy speech recognition. In this paper, we propose a method to use noise-adapted HMMs in VTS-based speech feature compensation. We derive a novel mathematical relation between the training and test noisy speech feature vectors in the log-spectrum domain, and VTS is used to estimate the statistics of the test noisy speech. An iterative EM algorithm estimates the training noisy speech from the test noisy speech along with the noise parameters. The proposed method was applied to the noise-adapted HMMs trained by MTR and MMSR, and reduced the relative word error rate significantly in noisy speech recognition experiments on the Aurora 2 database.
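For context, the standard VTS model relates clean and noisy log-spectral features through a nonlinear mismatch function, which is then linearized by a first-order Taylor expansion. A common form is written below for a clean-speech reference; the paper derives an analogous relation between training and test noisy speech rather than this clean/noisy pair:

```latex
% Log-spectral mismatch between clean speech x, additive noise n,
% and the observed noisy feature y (channel term omitted for brevity):
y = x + \log\!\left(1 + e^{\,n - x}\right)

% First-order VTS linearization around expansion points (x_0, n_0):
y \approx y_0
  + \left.\frac{\partial y}{\partial x}\right|_{x_0, n_0} (x - x_0)
  + \left.\frac{\partial y}{\partial n}\right|_{x_0, n_0} (n - n_0)
```

The linearized relation is what makes the EM-based estimation of the noise parameters tractable.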

Selecting Good Speech Features for Recognition

  • Lee, Young-Jik;Hwang, Kyu-Woong
    • ETRI Journal / Vol. 18, No. 1 / pp.29-41 / 1996
  • This paper describes a method to select suitable features for speech recognition using an information-theoretic measure. Conventional speech recognition systems heuristically choose frequency components, cepstrum, mel-cepstrum, energy, and their time differences of speech waveforms as features. However, such systems cannot perform well if the selected features are not suited to speech recognition, and since the recognition rate is the only performance measure of a recognition system, it is hard to judge how suitable a selected feature is. To solve this problem, it is essential to analyze the feature itself and measure how good it is. Good speech features should contain all of the class-related information and as little class-irrelevant variation as possible. In this paper, we suggest a method to measure the class-related information and the amount of class-irrelevant variation based on Shannon's information theory. Using this method, we compare mel-scaled FFT, cepstrum, mel-cepstrum, and wavelet features on the TIMIT speech data. The results show that, among these features, the mel-scaled FFT is the best feature for speech recognition under the proposed measure.
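The class-related-information criterion can be grounded in the usual mutual-information estimate. The paper works with continuous speech features, so the discrete histogram form below is an illustrative simplification:

```python
import math
from collections import Counter

def mutual_information(features, labels):
    """Estimate I(F; C) = sum_{f,c} p(f,c) log2(p(f,c) / (p(f) p(c)))
    from paired discrete observations. Class-informative features score
    high; features independent of the class score near zero."""
    n = len(features)
    pf = Counter(features)
    pc = Counter(labels)
    pfc = Counter(zip(features, labels))
    mi = 0.0
    for (f, c), count in pfc.items():
        p_joint = count / n
        # p_joint / (p(f) * p(c)) == p_joint * n * n / (count_f * count_c)
        mi += p_joint * math.log2(p_joint * n * n / (pf[f] * pc[c]))
    return mi

# Feature A perfectly predicts the class; feature B is constant.
labels = [0, 0, 1, 1]
mi_good = mutual_information([0, 0, 1, 1], labels)
mi_bad = mutual_information([7, 7, 7, 7], labels)
```

A perfectly class-aligned binary feature carries one bit about a balanced binary class, while a constant feature carries none, matching the intuition behind the paper's measure.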


신경 회로망을 이용한 음성 신호의 벡터 양자화 (Speech Signal Vector Quantization Using Neural Network)

  • 백승복;김상희
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 1999년도 추계종합학술대회 논문집 / pp.1015-1018 / 1999
  • This paper describes vector quantization for speech signal coding using neural networks. Speech signals are processed with the LPC method to extract speech features, and these features are quantized using a competitive neural network, the Kohonen self-organizing feature map.
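The winner-take-all core of such competitive learning can be sketched as follows. The LPC front end is omitted, and a full Kohonen map would additionally update the winner's topological neighbors; the data and learning rate here are illustrative:

```python
def train_codebook(data, codebook, epochs=20, lr=0.2):
    """Winner-take-all competitive learning: for each training vector,
    move the closest codeword a fraction `lr` of the way toward it."""
    for _ in range(epochs):
        for x in data:
            # Find the codeword nearest to x (squared Euclidean distance).
            w = min(range(len(codebook)),
                    key=lambda i: sum((c - a) ** 2
                                      for c, a in zip(codebook[i], x)))
            # Pull the winning codeword toward the training vector.
            codebook[w] = [c + lr * (a - c)
                           for c, a in zip(codebook[w], x)]
    return codebook

# Two well-separated clusters; each codeword should migrate to one.
data = [[0.0, 0.1], [0.1, 0.0], [5.0, 5.1], [5.1, 5.0]]
codebook = train_codebook(data, [[1.0, 1.0], [4.0, 4.0]])
```

After training, each codeword sits near the centroid of its cluster, which is exactly the role of a VQ codebook entry.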


목소리 특성과 음성 특징 파라미터의 상관관계와 SVM을 이용한 특성 분류 모델링 (Correlation analysis of voice characteristics and speech feature parameters, and classification modeling using SVM algorithm)

  • 박태성;권철홍
    • 말소리와 음성과학 / Vol. 9, No. 4 / pp.91-97 / 2017
  • This study categorizes several voice characteristics by subjective listening assessment and investigates the correlation between voice characteristics and speech feature parameters. A model was developed to classify voice characteristics into the defined categories using the SVM algorithm. To this end, we extracted various speech feature parameters from a speech database of men in their 20s and, through ANOVA, derived statistically significant parameters correlated with the voice characteristics. These parameters were then applied to the proposed SVM model. The experimental results show that speech feature parameters significantly correlated with the voice characteristics can be obtained, and that the proposed model achieves a classification accuracy of 88.5% on average.

음성구간 검출기의 실시간 적응화를 위한 음성 특징벡터의 차원 축소 방법 (Dimension Reduction Method of Speech Feature Vector for Real-Time Adaptation of Voice Activity Detection)

  • 박진영;이광석;허강인
    • 융합신호처리학회논문지 / Vol. 7, No. 3 / pp.116-121 / 2006
  • As a prerequisite for applying real-time adaptation techniques in various noise environments, this paper proposes a method for reducing a high-dimensional speech feature vector to a low dimension. The proposed method reduces the dimension non-linearly by mapping the feature vector to a probabilistic likelihood value, and speech/non-speech classification is performed with a likelihood ratio test (LRT). Experimental results confirmed that the classification performance is comparable to that obtained with the high-dimensional feature vectors. Furthermore, in speech recognition experiments using the speech data detected by the proposed method, recognition rates comparable to those obtained with 10th-order MFCCs (mel-frequency cepstral coefficients) were observed.
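The non-linear reduction of a feature vector to a single likelihood value, followed by an LRT decision, can be sketched with two diagonal Gaussians standing in for the speech and non-speech models. All model parameters here are illustrative:

```python
import math

def log_gauss(x, mean, var):
    """Log density of a diagonal-covariance Gaussian."""
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, mean, var))

def llr(x, speech_model, noise_model):
    """Map a multi-dimensional feature vector to the scalar
    log-likelihood ratio log p(x|speech) - log p(x|non-speech).
    Thresholding this one number replaces the full vector."""
    return log_gauss(x, *speech_model) - log_gauss(x, *noise_model)

speech_model = ([2.0, 2.0], [1.0, 1.0])  # (mean, variance), illustrative
noise_model = ([0.0, 0.0], [1.0, 1.0])
score_speech = llr([2.1, 1.9], speech_model, noise_model)
score_noise = llr([0.1, -0.2], speech_model, noise_model)
```

A frame is declared speech when the ratio exceeds a threshold (zero here), so the decision is carried entirely by the one-dimensional mapped value.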


음악과 음성 판별을 위한 웨이브렛 영역에서의 특징 파라미터 (Feature Parameter Extraction and Analysis in the Wavelet Domain for Discrimination of Music and Speech)

  • 김정민;배건성
    • 대한음성학회지:말소리 / No. 61 / pp.63-74 / 2007
  • Discrimination of music and speech in a multimedia signal is an important task in audio coding and broadcast monitoring systems. This paper deals with feature parameter extraction for discriminating music from speech. The wavelet transform is a multi-resolution analysis method useful for analyzing the temporal and spectral properties of non-stationary signals such as speech and audio. We propose new feature parameters, extracted from the wavelet-transformed signal, for discriminating music from speech. First, wavelet coefficients are obtained on a frame-by-frame basis, with the analysis frame size set to 20 ms. A parameter $E_{sum}$ is then defined by summing the magnitude differences between adjacent wavelet coefficients in each scale. The maximum and minimum values of $E_{sum}$ over a 2-second period, which corresponds to the discrimination duration, are used as feature parameters. To evaluate the proposed feature parameters, discrimination accuracy was measured for various types of music and speech signals. In the experiment, each 2-second segment was classified as music or speech, and about 93% of the music and speech segments were detected correctly.
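A rough sketch of the $E_{sum}$ computation follows, using a Haar decomposition in place of whatever wavelet family the paper actually uses; the frame data, level count, and window handling are illustrative assumptions:

```python
def haar_level(signal):
    """One level of a Haar-style wavelet transform: returns
    (approximation, detail) coefficient sequences."""
    approx = [(signal[i] + signal[i + 1]) / 2
              for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / 2
              for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def e_sum(frame, levels=3):
    """Sum, over the wavelet scales, of the magnitude differences
    between adjacent detail coefficients, mirroring E_sum."""
    total, current = 0.0, frame
    for _ in range(levels):
        current, detail = haar_level(current)
        total += sum(abs(detail[i + 1] - detail[i])
                     for i in range(len(detail) - 1))
    return total

# Over the 2 s decision window, the max/min of per-frame E_sum values
# serve as the discrimination features.
frames = [[float(i % 4) for i in range(32)],         # regular pattern
          [float((7 * i) % 13) for i in range(32)]]  # irregular pattern
values = [e_sum(f) for f in frames]
features = (max(values), min(values))
```

A perfectly periodic frame yields constant detail coefficients and hence $E_{sum} = 0$, while irregular content yields a positive value, which is the contrast the feature exploits.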


Feature Extraction Based on Speech Attractors in the Reconstructed Phase Space for Automatic Speech Recognition Systems

  • Shekofteh, Yasser;Almasganj, Farshad
    • ETRI Journal / Vol. 35, No. 1 / pp.100-108 / 2013
  • In this paper, a feature extraction (FE) method is proposed that is comparable to the traditional FE methods used in automatic speech recognition systems. Unlike conventional spectral-based FE methods, the proposed method evaluates the similarities between an embedded speech signal and a set of predefined speech attractor models in the reconstructed phase space (RPS) domain. In the first step, a set of Gaussian mixture models is trained to represent the speech attractors in the RPS. Next, for each new input speech frame, a posterior-probability-based feature vector is evaluated, representing the similarity between the embedded frame and the learned speech attractors. We conduct experiments on a speech recognition task using a toolkit based on hidden Markov models, over FARSDAT, a well-known Persian speech corpus. With the proposed FE method, we obtain a 3.11% absolute phoneme error rate improvement over the baseline system, which uses the mel-frequency cepstral coefficient FE method.
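The RPS embedding underlying the attractor models is the standard time-delay reconstruction. A minimal sketch is given below; the GMM attractor training and the posterior-feature step are not shown, and the dimension and delay values are illustrative:

```python
def embed(signal, dim=3, delay=2):
    """Reconstruct phase-space points from a scalar time series by
    time-delay embedding: point t is
    (x[t], x[t + delay], ..., x[t + (dim - 1) * delay])."""
    span = (dim - 1) * delay
    return [tuple(signal[t + k * delay] for k in range(dim))
            for t in range(len(signal) - span)]

x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
points = embed(x, dim=3, delay=2)
```

Each embedded point becomes an observation in the RPS, against which the trained Gaussian mixtures score posterior probabilities to form the feature vector.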

음성특성 학습 모델을 이용한 음성인식 시스템의 성능 향상 (Improvement of Speech Recognition System Using the Trained Model of Speech Feature)

  • 송점동
    • 정보학연구 / Vol. 3, No. 4 / pp.1-12 / 2000
  • Speech can be classified, according to its characteristics, into voices with strong high-frequency components and voices with strong low-frequency components. Previous speech recognition research, however, has built recognizers without considering these characteristics, resulting in relatively low recognition rates and requiring large amounts of data to construct recognition models. This paper proposes a method for distinguishing these speaker characteristics using formant frequencies, and a recognition method that builds recognition models reflecting the high- and low-frequency characteristics of a speaker's voice. Recognition models were constructed from the 47 monophones possible in Korean and trained on speech from 20 female and 20 male speakers. After classifying a voice's characteristics using a formant-frequency table and pitch values, recognition was performed with the model trained for the corresponding characteristic. Experiments with the proposed system showed higher recognition rates than the conventional method.
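The characteristic-dependent model selection can be caricatured as a threshold decision on pitch and formant measurements. All threshold values and the two-way split below are invented for illustration; the paper uses a formant-frequency table built from training data rather than fixed cutoffs:

```python
def select_model(pitch_hz, f1_hz, pitch_threshold=160.0,
                 formant_threshold=600.0):
    """Route a speaker to the recognition model trained on
    high-pitched or low-pitched voices, based on pitch and the
    first formant. Thresholds are hypothetical stand-ins for the
    paper's formant-table lookup."""
    high_votes = (pitch_hz > pitch_threshold) + (f1_hz > formant_threshold)
    return "high-model" if high_votes >= 1 else "low-model"

# Each label maps to a separately trained monophone HMM set.
models = {"high-model": "HMM set trained on high-pitched voices",
          "low-model": "HMM set trained on low-pitched voices"}
choice = select_model(pitch_hz=220.0, f1_hz=750.0)
```

Routing each utterance to a model trained on matching voices is what lets each model stay small while improving the recognition rate.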
