• 제목/요약/키워드: speaker-independent speech recognizer

Search results: 29 items (processing time: 0.019 s)

Performance of Vocabulary-Independent Speech Recognizers with Speaker Adaptation

  • Kwon, Oh Wook;Un, Chong Kwan;Kim, Hoi Rin
    • The Journal of the Acoustical Society of Korea
    • /
    • Vol. 16, No. 1E
    • /
    • pp.57-63
    • /
    • 1997
  • In this paper, we investigated the performance of a vocabulary-independent speech recognizer with speaker adaptation. The vocabulary-independent recognizer does not require task-oriented speech databases to estimate HMM parameters; instead, it adapts the parameters recursively using the input speech and recognition results. This relieves the effort of recording task-specific speech databases and allows the recognizer to be adapted easily to a new task or a new speaker with a different recognition vocabulary without losing recognition accuracy. Experimental results showed that supervised offline speaker adaptation reduced recognition errors by 40% when 80 words from the same vocabulary as the test data were used as adaptation data, while unsupervised online speaker adaptation reduced errors by about 43%. This performance is comparable to that of a speaker-independent speech recognizer trained on a task-oriented speech database.

  • PDF
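The recursive parameter adaptation described above can be illustrated as a MAP-style update that interpolates each Gaussian mean toward the adaptation data. This is a generic sketch, not the paper's exact algorithm; the function name, the hard frame-to-Gaussian alignment, and the prior weight `tau` are illustrative assumptions:

```python
import numpy as np

def map_adapt_means(means, frames, assignments, tau=10.0):
    """MAP-style update of HMM Gaussian mean vectors from adaptation data.

    means:       (M, D) current Gaussian means
    frames:      (N, D) adaptation feature vectors
    assignments: (N,) index of the Gaussian each frame aligns to
    tau:         prior weight; larger -> slower adaptation
    """
    new_means = means.copy()
    for m in range(means.shape[0]):
        x = frames[assignments == m]
        n = len(x)
        if n == 0:
            continue  # no adaptation data for this Gaussian; keep the prior
        # Interpolate between the prior mean and the sample mean.
        new_means[m] = (tau * means[m] + x.sum(axis=0)) / (tau + n)
    return new_means
```

As more adaptation frames accumulate, the update moves the mean away from the prior toward the observed data, which matches the recursive adaptation behavior the abstract describes.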

TMS320F28335 DSP를 이용한 화자독립 음성인식기 구현 (Implementation of a Speaker-independent Speech Recognizer Using the TMS320F28335 DSP)

  • 정익주
    • Journal of Industrial Technology
    • /
    • Vol. 29, No. A
    • /
    • pp.95-100
    • /
    • 2009
  • In this paper, we implemented a speaker-independent speech recognizer using the TMS320F28335 DSP, which is optimized for control applications. For this implementation, we used a small commercial DSP module and developed a peripheral board including a codec, signal-conditioning circuits, and I/O interfaces. The speech signal digitized by the TLV320AIC23 codec is analyzed using the MFCC feature extraction method and recognized using a continuous-density HMM. Thanks to the internal SRAM and flash memory of the TMS320F28335, no external memory devices were needed; the internal flash memory holds ADPCM data for voice response as well as the HMM data. Since the TMS320F28335 is optimized for control applications, the recognizer can play a useful role in voice-activated control, integrating speech recognition capability and the inherent control functions into a single DSP.

  • PDF
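The MFCC analysis step mentioned above follows a standard pipeline: pre-emphasis, windowing, power spectrum, mel filterbank, log compression, and DCT. A minimal single-frame NumPy sketch, with illustrative parameter defaults rather than the values actually used on the DSP:

```python
import numpy as np

def mfcc(signal, sr=8000, n_fft=256, n_mels=20, n_ceps=12):
    """Minimal single-frame MFCC: pre-emphasis, Hamming window,
    power spectrum, triangular mel filterbank, log, DCT-II."""
    # Pre-emphasis and windowing
    frame = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    frame = frame * np.hamming(len(frame))
    spec = np.abs(np.fft.rfft(frame, n_fft)) ** 2

    # Triangular mel filterbank
    def hz2mel(f): return 2595 * np.log10(1 + f / 700)
    def mel2hz(m): return 700 * (10 ** (m / 2595) - 1)
    pts = mel2hz(np.linspace(hz2mel(0), hz2mel(sr / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fbank[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    logmel = np.log(fbank @ spec + 1e-10)

    # DCT-II decorrelates the log-mel energies; keep n_ceps coefficients
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), n + 0.5) / n_mels)
    return dct @ logmel
```

An embedded implementation would typically replace the floating-point FFT and log with fixed-point or lookup-table versions, but the structure is the same.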

VQ와 GMM을 이용한 문맥독립 화자인식기의 성능 비교 (Performance comparison of Text-Independent Speaker Recognizer Using VQ and GMM)

  • 김성종;정훈;정익주
    • Speech Sciences
    • /
    • Vol. 7, No. 2
    • /
    • pp.235-244
    • /
    • 2000
  • This paper focused on realizing a text-independent speaker recognizer using the VQ and GMM algorithms and on studying the characteristics of speaker recognizers that adopt these two algorithms. Because it is difficult to ascertain theoretically what effect the two algorithms have on a speaker recognizer, we performed recognition experiments with various parameters. The results showed that the GMM algorithm had better recognition performance than the VQ algorithm, as follows: the GMM performed better with small amounts of training data, and its recognition rate varied little as the type of feature vector and the length of the input data changed. Overall, the GMM showed better recognition performance than the VQ.

  • PDF
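The VQ-vs-GMM comparison can be illustrated with scikit-learn: train one codebook and one mixture per speaker, then identify a test utterance by minimum VQ distortion or maximum GMM log-likelihood. The function names and model sizes are illustrative assumptions, not the paper's configuration:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

def train_models(speaker_feats, n=4, seed=0):
    """Train one VQ codebook (KMeans) and one GMM per speaker
    from that speaker's feature frames."""
    vq = {s: KMeans(n_clusters=n, n_init=5, random_state=seed).fit(X)
          for s, X in speaker_feats.items()}
    gmm = {s: GaussianMixture(n_components=n, random_state=seed).fit(X)
           for s, X in speaker_feats.items()}
    return vq, gmm

def identify(vq, gmm, X):
    """Identify the speaker of test frames X with each model type."""
    # KMeans.score is the negative quantization distortion, so the
    # best-matching codebook is the one with the highest score.
    vq_choice = max(vq, key=lambda s: vq[s].score(X))
    # GMM: the speaker whose mixture gives the highest average
    # log-likelihood of the test frames wins.
    gmm_choice = max(gmm, key=lambda s: gmm[s].score(X))
    return vq_choice, gmm_choice
```

The GMM's advantage with small training sets, reported above, comes from its soft cluster assignments and covariance modelling, whereas VQ scores only the distance to the nearest codeword.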

DSP를 이용한 자동차 소음에 강인한 음성인식기 구현 (Implementation of a Robust Speech Recognizer in Noisy Car Environment Using a DSP)

  • 정익주
    • Speech Sciences
    • /
    • Vol. 15, No. 2
    • /
    • pp.67-77
    • /
    • 2008
  • In this paper, we implemented a robust speech recognizer using the TMS320VC33 DSP. For this implementation, we built speech and noise databases suitable for a recognizer that uses the spectral subtraction method for noise removal. The recognizer has an explicit structure in that the speech signal is enhanced through spectral subtraction before endpoint detection and feature extraction. This makes the operation of the recognizer clear and helps build HMM models with minimal model mismatch. Since the recognizer was developed for controlling car facilities and for voice dialing, it has two recognition engines: a speaker-independent one for controlling car facilities and a speaker-dependent one for voice dialing. We adopted a continuous HMM for the former and a conventional DTW algorithm for the latter. Through various off-line recognition tests, we selected optimal values of several recognition parameters for a resource-limited embedded recognizer, which led to HMM models with three mixtures per state. The car-noise-added speech database is enhanced by spectral subtraction before HMM parameter estimation to reduce the model mismatch caused by the nonlinear distortion that spectral subtraction introduces. The hardware module developed includes a microcontroller for host interfacing, which handles the protocol between the DSP and a host.

  • PDF
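The spectral subtraction front end described above can be sketched in a few lines: estimate the noise magnitude spectrum from noise-only frames, over-subtract it from each noisy frame, and floor the result to limit musical noise. The over-subtraction factor `alpha` and floor `beta` are illustrative values, not the paper's settings:

```python
import numpy as np

def spectral_subtract(frames, noise_frames, alpha=2.0, beta=0.01, n_fft=256):
    """Magnitude spectral subtraction on framed audio.

    frames:       (T, L) noisy speech frames
    noise_frames: (K, L) noise-only frames used to estimate the noise spectrum
    """
    # Average noise magnitude spectrum over the noise-only frames
    noise_mag = np.mean(np.abs(np.fft.rfft(noise_frames, n_fft, axis=1)), axis=0)
    spec = np.fft.rfft(frames, n_fft, axis=1)
    mag, phase = np.abs(spec), np.angle(spec)
    # Over-subtract, then floor at a fraction of the noise level
    clean = np.maximum(mag - alpha * noise_mag, beta * noise_mag)
    # Resynthesize with the (unmodified) noisy phase
    return np.fft.irfft(clean * np.exp(1j * phase), n_fft, axis=1)
```

Running the training database through the same enhancement before HMM estimation, as the paper does, keeps the models matched to the residual distortion of the subtraction.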

음성을 이용한 화자 및 문장독립 감정인식 (Speaker and Context Independent Emotion Recognition using Speech Signal)

  • 강면구;김원구
    • The Institute of Electronics Engineers of Korea: Conference Proceedings
    • /
    • Proceedings of the 2002 IEEK Summer Conference (4)
    • /
    • pp.377-380
    • /
    • 2002
  • In this paper, speaker- and context-independent emotion recognition using the speech signal is studied. For this purpose, a corpus of emotional speech data, recorded and classified by emotion using subjective evaluation, was used to build statistical feature vectors such as the average, standard deviation, and maximum of pitch and energy, and to evaluate the performance of conventional pattern-matching algorithms. A vector-quantization-based emotion recognition system is proposed for speaker- and context-independent emotion recognition. Experimental results showed that the vector-quantization-based emotion recognizer using MFCC parameters performed better than the one using pitch and energy parameters.

  • PDF
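The utterance-level statistical feature vector described above (average, standard deviation, and maximum of the pitch and energy contours) can be sketched as follows; treating zero-pitch frames as unvoiced is an assumption for illustration, not necessarily the paper's convention:

```python
import numpy as np

def prosodic_stats(pitch, energy):
    """Build a 6-dimensional utterance-level feature vector:
    mean, standard deviation, and maximum of the pitch and
    energy contours, ignoring unvoiced (zero-pitch) frames."""
    feats = []
    for contour in (np.asarray(pitch, float), np.asarray(energy, float)):
        voiced = contour[contour > 0]  # drop unvoiced/zero frames
        feats += [voiced.mean(), voiced.std(), voiced.max()]
    return np.array(feats)
```

One such vector per utterance is what the VQ-based recognizer above would quantize and classify.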

GMM을 이용한 화자 및 문장 독립적 감정 인식 시스템 구현 (Speaker and Context Independent Emotion Recognition System using Gaussian Mixture Model)

  • 강면구;김원구
    • The Institute of Electronics Engineers of Korea: Conference Proceedings
    • /
    • Proceedings of the 2003 IEEK Summer Conference (IV)
    • /
    • pp.2463-2466
    • /
    • 2003
  • This paper studied pattern recognition algorithms and feature parameters for emotion recognition. The KNN algorithm was used as the pattern-matching technique for comparison, and VQ and GMM were used for speaker- and context-independent recognition. The speech parameters used as features are pitch, energy, MFCC, and their first and second derivatives. Experimental results showed that the emotion recognizer using MFCC and its derivatives as features performed better than the one using pitch and energy parameters. Among the pattern recognition algorithms, the GMM-based emotion recognizer was superior to the KNN- and VQ-based recognizers.

  • PDF
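The first and second derivatives (delta and delta-delta features) mentioned above are conventionally computed with a linear regression over neighbouring frames. A sketch using the standard regression formula, with an assumed window width of 2 frames:

```python
import numpy as np

def add_deltas(feats, width=2):
    """Append delta and delta-delta features to a (T, D) feature matrix
    using the regression d_t = sum_w w*(c_{t+w} - c_{t-w}) / (2*sum_w w^2),
    with edge padding at the utterance boundaries."""
    def delta(x):
        pad = np.pad(x, ((width, width), (0, 0)), mode="edge")
        num = sum(w * (pad[width + w:len(x) + width + w] -
                       pad[width - w:len(x) + width - w])
                  for w in range(1, width + 1))
        return num / (2 * sum(w * w for w in range(1, width + 1)))
    d1 = delta(feats)
    return np.hstack([feats, d1, delta(d1)])  # (T, 3*D)
```

Appending the derivatives triples the feature dimension but captures the temporal dynamics that the abstract credits for the improved emotion recognition.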

독립성분분석을 이용한 DSP 기반의 화자 독립 음성 인식 시스템의 구현 (Implementation of Speaker Independent Speech Recognition System Using Independent Component Analysis based on DSP)

  • 김창근;박진영;박정원;이광석;허강인
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • Vol. 8, No. 2
    • /
    • pp.359-364
    • /
    • 2004
  • In this paper, we implemented a real-time speaker-independent speech recognition system, robust to noisy environments, using a general-purpose digital signal processor. The system uses TI's TMS320C32 general-purpose floating-point DSP, together with a speech CODEC for real-time speech input and an expanded external interface for outputting recognition results. Instead of the commonly used MFCC (Mel Frequency Cepstral Coefficient) features, the real-time recognizer uses feature parameters obtained by transforming the MFCC feature space through independent component analysis, which makes them robust to external noise. Recognition experiments in noisy environments with the two kinds of feature parameters confirmed that the features based on independent component analysis outperform MFCC.
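The ICA transformation of the MFCC feature space described above can be approximated with scikit-learn's FastICA: learn an unmixing basis on training MFCC frames and project both training and test frames onto it. The function name and dimensionality are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import FastICA

def ica_features(mfcc_train, mfcc_test, n_components=12, seed=0):
    """Learn an ICA basis on training MFCC frames and project both
    sets, yielding statistically independent feature dimensions."""
    ica = FastICA(n_components=n_components, random_state=seed,
                  max_iter=1000)
    train_ic = ica.fit_transform(mfcc_train)  # (T_train, n_components)
    test_ic = ica.transform(mfcc_test)        # same basis for test data
    return train_ic, test_ic
```

The key point is that the same learned basis is applied to training and test frames, so the HMMs are estimated and evaluated in the transformed space.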

음성 신호를 사용한 GMM기반의 감정 인식 (GMM-based Emotion Recognition Using Speech Signal)

  • 서정태;김원구;강면구
    • The Journal of the Acoustical Society of Korea
    • /
    • Vol. 23, No. 3
    • /
    • pp.235-241
    • /
    • 2004
  • This paper studied feature parameters and pattern recognition algorithms for speaker- and context-independent emotion recognition. A KNN-based algorithm was used for comparison with existing emotion recognition methods, and VQ- and GMM-based algorithms were used for speaker- and context-independent emotion recognition. The speech parameters used as features were pitch, energy, MFCC, and their first and second derivatives. The experiments showed better emotion recognition performance when MFCC and its derivatives were used as feature parameters than when pitch and energy were used, and the GMM-based recognition algorithm was more suitable for a speaker- and context-independent emotion recognition system than KNN or VQ.
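The KNN baseline used for comparison above can be sketched as majority voting among the nearest utterance-level feature vectors; the function name and the Euclidean metric are illustrative assumptions:

```python
import numpy as np

def knn_classify(train_X, train_y, x, k=3):
    """Classify feature vector x by majority vote among the k
    training vectors closest to it in Euclidean distance."""
    d = np.linalg.norm(train_X - x, axis=1)
    votes = [train_y[i] for i in np.argsort(d)[:k]]
    return max(set(votes), key=votes.count)
```

Unlike the GMM, which models each emotion's feature distribution, KNN memorizes the training vectors directly, which is one reason it generalizes less well across speakers and sentences.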

감정 인식을 위한 음성의 특징 파라메터 비교 (The Comparison of Speech Feature Parameters for Emotion Recognition)

  • 김원구
    • Korean Institute of Intelligent Systems: Conference Proceedings
    • /
    • Proceedings of the 2004 Spring Conference of the Korea Fuzzy Logic and Intelligent Systems Society, Vol. 14, No. 1
    • /
    • pp.470-473
    • /
    • 2004
  • In this paper, speech feature parameters are compared for emotion recognition using the speech signal. For this purpose, a corpus of emotional speech data, recorded and classified by emotion using subjective evaluation, was used to build statistical feature vectors such as the average, standard deviation, and maximum of pitch and energy. MFCC parameters and their derivatives, with or without cepstral mean subtraction, are also used to evaluate the performance of conventional pattern-matching algorithms. Pitch and energy parameters were used as prosodic information, and MFCC parameters as phonetic information. In the experiments, a vector-quantization-based emotion recognition system is used for speaker- and context-independent emotion recognition. The results showed that the vector-quantization-based emotion recognizer using MFCC parameters performed better than the one using pitch and energy parameters, achieving a recognition rate of 73.3% for speaker- and context-independent classification.

  • PDF
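Cepstral mean subtraction, mentioned above as an optional normalization, removes the per-utterance mean of each cepstral coefficient. Since a stationary channel adds a constant offset in the cepstral domain, CMS cancels it exactly. A minimal sketch:

```python
import numpy as np

def cms(mfcc):
    """Cepstral mean subtraction: remove the per-utterance mean of
    each cepstral coefficient to compensate for stationary channel
    (e.g., microphone) effects on a (T, D) MFCC matrix."""
    return mfcc - mfcc.mean(axis=0, keepdims=True)
```

Because the channel offset cancels, the same utterance recorded through two different stationary channels yields identical features after CMS.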

스무고개 게임을 위한 음성인식 (Speech Recognition for twenty questions game)

  • 노용완;윤재선;홍광석
    • The Institute of Electronics Engineers of Korea: Conference Proceedings
    • /
    • Proceedings of the 2002 IEEK Summer Conference (4)
    • /
    • pp.203-206
    • /
    • 2002
  • In this paper, we present a sentence speech recognizer for the twenty questions game. The proposed approach to speaker-independent sentence speech recognition consists of two steps: extracting the number of syllables in each eojeol (word phrase) for candidate reduction, and applying a knowledge-based language model for sentence recognition. For the twenty questions game, we implemented a speech recognizer using 956 sentences and 1,095 eojeols. In our experiments, we obtained an 87% sentence recognition rate and a 90.15% eojeol recognition rate.

  • PDF
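The syllable-count pruning step can be sketched directly: in Unicode, precomposed Hangul syllables occupy the block U+AC00..U+D7A3, so counting them per eojeol and filtering the candidate list is straightforward. The function names are illustrative, not the paper's:

```python
def syllable_count(eojeol):
    """Count Hangul syllable characters (U+AC00..U+D7A3) in an eojeol."""
    return sum(1 for ch in eojeol if 0xAC00 <= ord(ch) <= 0xD7A3)

def prune_candidates(candidates, n_syllables):
    """Keep only candidate eojeols whose syllable count matches the
    count estimated from the input speech, shrinking the search space
    before the language model is applied."""
    return [c for c in candidates if syllable_count(c) == n_syllables]
```

With a vocabulary of 1,095 eojeols, restricting the match to candidates of the detected syllable length cuts the comparison set substantially before sentence-level scoring.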