• Title/Abstract/Keyword: Speaker Independent Speech Recognition

146 search results (processing time: 0.025 s)

Performance of Vocabulary-Independent Speech Recognizers with Speaker Adaptation

  • Kwon, Oh Wook;Un, Chong Kwan;Kim, Hoi Rin
    • The Journal of the Acoustical Society of Korea
    • /
    • Vol. 16, No. 1E
    • /
    • pp.57-63
    • /
    • 1997
  • In this paper, we investigated the performance of a vocabulary-independent speech recognizer with speaker adaptation. The vocabulary-independent recognizer does not require task-oriented speech databases to estimate HMM parameters; instead, it adapts the parameters recursively using input speech and recognition results. This relieves the effort of recording speech databases and allows the recognizer to be easily adapted to a new task or a new speaker with a different recognition vocabulary without losing recognition accuracy. Experimental results showed that the recognizer with supervised offline speaker adaptation reduced recognition errors by 40% when 80 words from the same vocabulary as the test data were used as adaptation data. With unsupervised online speaker adaptation, it reduced recognition errors by about 43%. This performance is comparable to that of a speaker-independent speech recognizer trained on a task-oriented speech database.
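The recursive parameter adaptation described above can be sketched as a MAP-style running update of a Gaussian mean. The single-Gaussian view, the prior count, and all numbers below are illustrative simplifications, not the paper's exact estimator:

```python
# Minimal sketch of recursive (online) adaptation of an HMM Gaussian mean.
# A MAP-style update interpolates the prior mean with the sample mean of
# newly observed frames; the prior count of 10 is an illustrative assumption.

def adapt_mean(prior_mean, prior_count, frames):
    """Pull the prior mean toward the sample mean of new frames,
    weighted by how much data each side represents."""
    n = len(frames)
    if n == 0:
        return prior_mean, prior_count
    sample_mean = sum(frames) / n
    new_count = prior_count + n
    new_mean = (prior_count * prior_mean + n * sample_mean) / new_count
    return new_mean, new_count

mean, count = 0.0, 10.0                      # prior mean, effective count
mean, count = adapt_mean(mean, count, [1.0] * 5)
print(round(mean, 3))  # prior pulled 1/3 of the way toward the new data
```

Each new utterance shifts the model a little further toward the current speaker, which is the essence of the unsupervised online adaptation reported above.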


Speaker and Context Independent Emotion Recognition using Speech Signal

  • 강면구;김원구
    • Proceedings of the IEEK Conference
    • /
    • Proceedings of the 2002 IEEK Summer Conference (4)
    • /
    • pp.377-380
    • /
    • 2002
  • In this paper, speaker- and context-independent emotion recognition using speech signals is studied. For this purpose, a corpus of emotional speech data, recorded and classified by emotion through subjective evaluation, was used to build statistical feature vectors (the average, standard deviation, and maximum of pitch and energy) and to evaluate the performance of conventional pattern-matching algorithms. A vector-quantization-based emotion recognition system is proposed for speaker- and context-independent emotion recognition. Experimental results showed that the vector-quantization-based emotion recognizer using MFCC parameters performed better than the one using pitch and energy parameters.
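The utterance-level statistical features described above (mean, standard deviation, and maximum of the pitch and energy contours) take only a few lines to compute; the contour values below are made up:

```python
import statistics

# Utterance-level statistical features for emotion recognition: mean,
# standard deviation, and maximum of the pitch and energy contours.

def stat_features(contour):
    return (statistics.mean(contour),
            statistics.pstdev(contour),
            max(contour))

pitch = [120.0, 130.0, 125.0, 140.0]   # Hz, frame-wise pitch contour
energy = [0.2, 0.5, 0.4, 0.3]          # frame-wise energy contour
features = stat_features(pitch) + stat_features(energy)
print(len(features))  # 6-dimensional utterance-level feature vector
```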


Review And Challenges In Speech Recognition (ICCAS 2005)

  • Ahmed, M.Masroor;Ahmed, Abdul Manan Bin
    • ICROS Conference Proceedings
    • /
    • 2005 ICCAS (Institute of Control, Robotics and Systems)
    • /
    • pp.1705-1709
    • /
    • 2005
  • This paper reviews the area of speech recognition and its challenges, taking into account different classes of recognition mode. The recognition mode can be either speaker-independent or speaker-dependent. Vocabulary size and input mode are two crucial factors for a speech recognizer. The input mode refers to continuous versus isolated speech recognition, and the vocabulary size can be small (fewer than a hundred words) or large (up to a few thousand words), varying according to system design and objectives [2]. The paper is organized as follows: first it covers various fundamental methods of speech recognition, then it examines deficiencies in existing systems, and finally it discusses probable application areas.


Speaker Independent Recognition Algorithm based on Parameter Extraction by MFCC applied Wiener Filter Method

  • 최재승
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • Vol. 21, No. 6
    • /
    • pp.1149-1154
    • /
    • 2017
  • To obtain good recognition performance from a speech recognition system under background noise, it is very important to select appropriate speech feature parameters. The feature parameter used in this paper is the mel-frequency cepstral coefficient (MFCC), which exploits human auditory characteristics, with a Wiener filter applied. That is, the proposed feature parameter is extracted from the clean speech signal after the background noise has been removed. Speaker recognition is implemented by training a multilayer perceptron network on the proposed modified MFCC feature parameters. In the experiments, speaker-independent recognition tests were conducted using 14th-order MFCC feature parameters, and an effective average speaker-independent recognition rate of 94.48% was obtained for speech mixed with white noise. Compared with existing methods, the speaker recognition performance was improved by using the proposed modified MFCC feature parameters.
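The Wiener-filter step can be sketched as a per-frequency spectral gain applied to the noisy power spectrum before MFCC extraction. The spectral-subtraction SNR estimate and the toy spectra below are illustrative assumptions, not the paper's exact front end:

```python
# Per-frequency Wiener gain applied to a noisy power spectrum before MFCC
# extraction; the noise spectrum is assumed to be already estimated.

def wiener_gain(noisy_psd, noise_psd, floor=1e-10):
    gains = []
    for p_y, p_n in zip(noisy_psd, noise_psd):
        snr = max(p_y - p_n, floor) / max(p_n, floor)  # crude a-priori SNR
        gains.append(snr / (1.0 + snr))                # Wiener gain SNR/(1+SNR)
    return gains

noisy = [4.0, 2.0, 1.0]                # noisy power spectrum (3 bins)
noise = [1.0, 1.0, 1.0]                # estimated noise power spectrum
clean = [g * p for g, p in zip(wiener_gain(noisy, noise), noisy)]
print([round(c, 2) for c in clean])    # noise-dominated bins are suppressed
```

MFCCs computed from the cleaned spectrum then carry far less of the background noise into the recognizer.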

Speaker Identification using Phonetic GMM

  • 권석봉;김회린
    • Proceedings of the KSPS Conference
    • /
    • Proceedings of the October 2003 KSPS Conference
    • /
    • pp.185-188
    • /
    • 2003
  • In this paper, we construct a phonetic GMM for a text-independent speaker identification system. The basic idea is to combine the advantages of the baseline GMM and the HMM: GMMs are better suited to text-independent speaker identification, whereas HMMs work better in text-dependent systems. The phonetic GMM represents a more sophisticated text-dependent speaker model built on top of a text-independent speaker model. In the speaker identification system, the phonetic GMM using HMM-based speaker-independent phoneme recognition performs better than the baseline GMM. In addition, an N-best recognition algorithm is used to decrease the computational complexity and to make the system applicable to new speakers.
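GMM-based speaker scoring can be illustrated in miniature. The paper builds one GMM per phoneme per speaker; the toy version below gives each speaker a single 1-D two-component mixture, and all parameters are made up:

```python
import math

# Toy GMM-based speaker identification: score frames against each speaker's
# mixture and pick the speaker with the highest total log-likelihood.

def gmm_loglik(x, weights, means, variances):
    p = 0.0
    for w, m, v in zip(weights, means, variances):
        p += w * math.exp(-0.5 * (x - m) ** 2 / v) / math.sqrt(2 * math.pi * v)
    return math.log(p)

speakers = {
    "spk_a": ([0.5, 0.5], [0.0, 1.0], [1.0, 1.0]),
    "spk_b": ([0.5, 0.5], [5.0, 6.0], [1.0, 1.0]),
}
frames = [0.2, 0.8, 1.1]               # observed feature values
scores = {s: sum(gmm_loglik(x, *params) for x in frames)
          for s, params in speakers.items()}
best = max(scores, key=scores.get)
print(best)  # frames near 0-1 match spk_a's mixture
```

A phonetic GMM would first assign each frame to a phoneme (via the HMM recognizer) and then score it only against that phoneme's per-speaker mixtures.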


Implementation of Speaker Independent Speech Recognition System Using Independent Component Analysis based on DSP

  • 김창근;박진영;박정원;이광석;허강인
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • Vol. 8, No. 2
    • /
    • pp.359-364
    • /
    • 2004
  • In this paper, we implemented a real-time speaker-independent speech recognition system robust to noisy environments on a general-purpose digital signal processor. The system is built on TI's TMS320C32, a general-purpose floating-point DSP, with a speech CODEC for real-time speech input and an extended external interface for outputting recognition results. Instead of the commonly used mel-frequency cepstral coefficients (MFCC), the real-time recognizer uses feature parameters obtained by transforming the MFCC feature space through independent component analysis (ICA), making them robust to external noise. Recognition experiments in noisy environments with the two feature parameters confirmed that the ICA-based features outperform MFCC.
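The conceptual core of the ICA feature transform is that each MFCC frame is multiplied by a learned unmixing matrix. The 2x2 matrix and 2-D frames below are hypothetical stand-ins for a matrix actually estimated by ICA on training data:

```python
# Applying a learned linear (ICA unmixing) transform to each MFCC frame.
# W here is a made-up 2x2 rotation standing in for a learned unmixing matrix.

def transform(frames, W):
    return [[sum(w * xi for w, xi in zip(row, x)) for row in W] for x in frames]

W = [[0.8, -0.6],
     [0.6, 0.8]]                       # hypothetical unmixing matrix
mfcc = [[1.0, 0.0], [0.0, 1.0]]        # two toy MFCC frames
transformed = transform(mfcc, W)
print(transformed)
```

On the DSP the same fixed matrix multiply runs per frame, so the transform adds little cost at recognition time.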

Speech Emotion Recognition Using Confidence Level for Emotional Interaction Robot

  • 김은호
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • Vol. 19, No. 6
    • /
    • pp.755-759
    • /
    • 2009
  • Recognizing human emotions is one of the important research topics in human-robot interaction. In particular, speaker-independent emotion recognition is an essential issue for the commercialization of speech emotion recognition. In general, speaker-independent emotion recognition systems show lower recognition rates than speaker-dependent systems because emotion features vary with speaker and gender. This paper therefore presents a method for implementing a consistent and accurate speaker-independent emotion recognition system by rejecting recognition results based on a confidence measure. The efficiency and feasibility of the proposed method are verified by comparison with a conventional method.
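A minimal sketch of confidence-based rejection, assuming the confidence measure is the margin between the top two class scores (the paper's actual measure may differ; the threshold and scores are made up):

```python
# Confidence-based rejection: accept a recognition result only when the top
# score is well separated from the runner-up, otherwise reject it.

def classify_with_rejection(scores, margin=0.2):
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    confidence = ranked[0][1] - ranked[1][1]
    return ranked[0][0] if confidence >= margin else "rejected"

confident = classify_with_rejection({"anger": 0.9, "joy": 0.3, "neutral": 0.1})
ambiguous = classify_with_rejection({"anger": 0.45, "joy": 0.40, "neutral": 0.15})
print(confident, ambiguous)
```

Rejecting low-confidence utterances trades coverage for consistency, which is the behavior the paper targets for an interaction robot.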

Speaker Identification Using Augmented PCA in Unknown Environments

  • 유하진
    • Malsori (Journal of the Korean Society of Phonetic Sciences)
    • /
    • No. 54
    • /
    • pp.73-83
    • /
    • 2005
  • The goal of our research is to build a text-independent speaker identification system that can be used in any condition without any additional adaptation process. The performance of speaker recognition systems can be severely degraded under unknown, mismatched microphone and noise conditions. In this paper, we show that PCA (principal component analysis) can improve performance in such situations. We also propose an augmented PCA process, which appends class-discriminative information to the original feature vectors before the PCA transformation and selects the best direction for each pair of highly confusable speakers. The proposed method reduced the relative recognition error by 21%.
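The augmentation step of the proposed augmented PCA, in which class-discriminative information is appended to each feature vector before the PCA transform is estimated, can be sketched as follows; the one-hot indicator encoding and its scale are illustrative assumptions:

```python
# Augmented PCA, step one: append class-discriminative dimensions (here a
# one-hot speaker indicator) to each feature vector; PCA is then estimated
# on the augmented vectors so its directions carry class information.

def augment(frames, speaker_id, num_speakers, scale=1.0):
    indicator = [0.0] * num_speakers
    indicator[speaker_id] = scale      # discriminative dimensions
    return [x + indicator for x in frames]

frames = [[0.1, 0.2], [0.3, 0.4]]      # toy 2-D feature frames
augmented = augment(frames, speaker_id=1, num_speakers=3)
print(augmented[0])  # 2-D frame extended to 5-D before PCA
```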


Speaker Adaptation Using i-Vector Based Clustering

  • Kim, Minsoo;Jang, Gil-Jin;Kim, Ji-Hwan;Lee, Minho
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 14, No. 7
    • /
    • pp.2785-2799
    • /
    • 2020
  • We propose a novel speaker adaptation method using acoustic model clustering. The similarity of different speakers is defined by the cosine distance between their i-vectors (intermediate vectors), and various efficient clustering algorithms are applied to obtain a number of speaker subsets with different characteristics. The speaker-independent model is then retrained with the training data of the individual speaker subsets grouped by the clustering results, and an unknown utterance is recognized by the retrained model of the closest cluster. The proposed method is applied to a large-scale speech recognition system implemented in a hybrid hidden Markov model and deep neural network framework. An experiment was conducted to evaluate word error rates on the Resource Management database. When the proposed speaker adaptation method using i-vector-based clustering was applied, performance improved relative to the conventional speaker-independent speech recognition model by as much as 12.2% for the conventional fully connected neural network, and by as much as 10.5% for the bidirectional long short-term memory.
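The cosine distance between i-vectors that drives the clustering can be written directly; the 3-D vectors below are toy examples (real i-vectors have hundreds of dimensions):

```python
import math

# Cosine distance between i-vectors, the speaker-similarity measure used to
# form the clusters that each retrained acoustic model covers.

def cosine_distance(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)

ivec_a = [1.0, 0.0, 0.0]
ivec_b = [0.9, 0.1, 0.0]               # similar speaker
ivec_c = [0.0, 1.0, 0.0]               # dissimilar speaker
d_ab = cosine_distance(ivec_a, ivec_b)
d_ac = cosine_distance(ivec_a, ivec_c)
print(d_ab < d_ac)  # similar speakers land in the same cluster
```

At recognition time, an unknown speaker's i-vector is compared against each cluster centroid with the same distance, and the closest cluster's model is used.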

A Study on the Submission of Multiple Candidates for Decision in Speaker-Independent Speech Recognition by VQ/HMM

  • 이창영;남호수
    • Speech Sciences
    • /
    • Vol. 12, No. 3
    • /
    • pp.115-124
    • /
    • 2005
  • We investigated the submission of multiple candidates in speaker-independent speech recognition by VQ/HMM. Submission of a fixed number of candidates was examined first: as the number of candidates increased to two, three, and four, the recognition error rate decreased by 41%, 58%, and 65%, respectively, compared with a single candidate. We then tried another approach in which all candidates within a range of Viterbi scores are submitted; the number of candidates grows geometrically as the admitted range widens. For practical application, a combination of the two methods was also studied: we chose the candidates within a range of Viterbi scores and limited the maximum number of submitted candidates to five. Experimental results showed that a recognition error rate of less than 10% could be achieved by this method with an average of 3.2 candidates.
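The combined rule described above, keeping every candidate whose Viterbi score lies within a fixed range of the best score but submitting at most five, can be sketched as follows (word labels and score values are made up):

```python
# Combined candidate-selection rule: score-range cutoff plus a hard cap on
# the number of submitted candidates.

def select_candidates(scored, score_range=5.0, max_candidates=5):
    ranked = sorted(scored.items(), key=lambda kv: kv[1], reverse=True)
    best_score = ranked[0][1]
    within = [word for word, s in ranked if best_score - s <= score_range]
    return within[:max_candidates]

scores = {"seoul": -100.0, "daegu": -101.0, "suwon": -102.0,
          "sokcho": -104.5, "sejong": -109.0, "busan": -120.0}
selected = select_candidates(scores)
print(selected)  # four candidates fall inside the 5-point range
```

The cap bounds the worst-case list length, while the score range keeps the average list short when the best hypothesis is well separated.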
