• Title/Abstract/Keyword: Speaker recognition

554 results found

On Speaker Adaptations with Sparse Training Data for Improved Speaker Verification

  • Ahn, Sung-Joo;Kang, Sun-Mee;Ko, Han-Seok
    • Speech Sciences
    • /
    • Vol. 7, No. 1
    • /
    • pp.31-37
    • /
    • 2000
  • This paper concerns effective speaker adaptation methods for solving the over-training problem in speaker verification, which frequently occurs when modeling a speaker with sparse training data. While various speaker adaptation methods have already been applied to speech recognition, they have not yet been formally considered in speaker verification. This paper proposes speaker adaptation methods using a combination of MAP and MLLR adaptation, both used successfully in speech recognition, and applies them to speaker verification. Experimental results show that a speaker verification system using weighted MAP and MLLR adaptation outperforms conventional speaker models without adaptation by a factor of up to 5. These results show that the speaker adaptation method achieves significantly better performance even when only sparse training data are available for speaker verification.

  • PDF
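
The weighted MAP/MLLR combination described above can be sketched roughly as follows. The function names, the relevance factor `tau`, and the interpolation weight `w` are illustrative assumptions rather than the paper's notation, and a real system would adapt full GMM parameters, not a single mean vector:

```python
def map_adapt(prior_mean, data_mean, n_frames, tau=16.0):
    """MAP update of a Gaussian mean: the prior is pulled toward the data
    mean; tau is a relevance factor controlling the adaptation speed."""
    alpha = n_frames / (n_frames + tau)
    return [alpha * x + (1.0 - alpha) * m for x, m in zip(data_mean, prior_mean)]

def mllr_adapt(prior_mean, A, b):
    """MLLR update: affine transform mu' = A @ mu + b (A diagonal here)."""
    return [a * m + bi for a, m, bi in zip(A, prior_mean, b)]

def weighted_adapt(prior_mean, data_mean, n_frames, A, b, w=0.5):
    """Interpolate the two adapted means with weight w (a tunable assumption)."""
    mu_map = map_adapt(prior_mean, data_mean, n_frames)
    mu_mllr = mllr_adapt(prior_mean, A, b)
    return [w * p + (1.0 - w) * q for p, q in zip(mu_map, mu_mllr)]
```

With sparse data, `alpha` stays small and the MAP mean stays close to the prior, which is why interpolating with a globally estimated MLLR transform can help.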

Estimation of Speaker Recognition Parameters Using the Lyapunov Dimension

  • 유병욱;김창석
    • The Journal of the Acoustical Society of Korea
    • /
    • Vol. 16, No. 4
    • /
    • pp.42-48
    • /
    • 1997
  • In this paper, speech is regarded as chaos, i.e., an irregular signal produced by a nonlinear deterministic generating mechanism, and the correlation dimension and Lyapunov dimension are computed to evaluate their performance as parameters for speaker identification and speech recognition. When constructing the strange attractor using Takens' embedding theorem, the dominant period was obtained from the power spectrum of an AR model so that the correlation dimension and Lyapunov dimension could be estimated accurately. The usefulness of the correlation dimension and Lyapunov dimension, which characterize the attractor trajectory well, as feature parameters for speech recognition and speaker recognition was then examined. The results show that they are better suited as feature parameters for speaker identification than for speech recognition, and that as a speaker identification feature the Lyapunov dimension yields a higher identification rate than the correlation dimension.

  • PDF
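
The Lyapunov dimension used as a feature above is commonly estimated from a spectrum of Lyapunov exponents via the Kaplan-Yorke formula; a minimal sketch (the example spectrum in the test is illustrative, not taken from the paper):

```python
def lyapunov_dimension(exponents):
    """Kaplan-Yorke estimate of the Lyapunov dimension.

    Sort the exponents in descending order, find the largest j such that
    the partial sum lambda_1 + ... + lambda_j stays non-negative, and
    return j + (that partial sum) / |lambda_{j+1}|.
    """
    exps = sorted(exponents, reverse=True)
    cum = 0.0
    for j, lam in enumerate(exps):
        if cum + lam < 0:          # adding this exponent turns the sum negative
            return j + cum / abs(lam)
        cum += lam
    return float(len(exps))        # sum never goes negative: full phase-space dimension
```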

Text-independent Speaker Identification by Bagging VQ Classifier

  • Kyung, Youn-Jeong;Park, Bong-Dae;Lee, Hwang-Soo
    • The Journal of the Acoustical Society of Korea
    • /
    • Vol. 20, No. 2E
    • /
    • pp.17-24
    • /
    • 2001
  • In this paper, we propose a bootstrap aggregating (bagging) vector quantization (VQ) classifier to improve the performance of text-independent speaker recognition systems. This method generates multiple training data sets by resampling the original training data set, constructs the corresponding VQ classifiers, and then integrates them into a single classifier by voting. The bagging method has been proven to greatly improve the performance of unstable classifiers. Through two different experiments, this paper shows that the VQ classifier is unstable. In the first, the bias and variance of a VQ classifier are computed on a waveform database, and the variance of the VQ classifier is shown to be as large as that of the classification and regression tree (CART) classifier[1]. The second experiment involves speaker recognition, where the recognition rates vary significantly with minor changes in the training data set. Closed-set, text-independent speaker identification experiments are performed on the TIMIT database to compare the bagging VQ classifier with the conventional VQ classifier. The bagging VQ classifier yields improved performance over the conventional VQ classifier, and it also outperforms the conventional classifier on small training-set problems.

  • PDF
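
The bagging VQ procedure — bootstrap-resample the training frames, build one codebook per speaker per bag, classify by minimum quantization distortion, and vote — can be sketched as below. The toy codebook construction (random codeword sampling instead of proper LBG/k-means training) and all names are simplifying assumptions:

```python
import random

def distortion(x, codebook):
    """Quantization distortion of vector x: squared L2 to the nearest codeword."""
    return min(sum((a - b) ** 2 for a, b in zip(x, c)) for c in codebook)

def train_vq(frames, n_codewords, rng):
    """Crude VQ codebook: sample codewords from the data (a real system
    would run LBG/k-means here)."""
    return [rng.choice(frames) for _ in range(n_codewords)]

def bagging_vq(train_data, test_frames, n_bags=5, n_codewords=4, seed=0):
    """train_data: {speaker: [feature vectors]}. Each bag bootstrap-resamples
    every speaker's frames, trains a codebook per speaker, and casts one vote
    for the speaker with the lowest total distortion; majority vote wins."""
    rng = random.Random(seed)
    votes = {}
    for _ in range(n_bags):
        books = {}
        for spk, frames in train_data.items():
            resampled = [rng.choice(frames) for _ in frames]  # bootstrap sample
            books[spk] = train_vq(resampled, n_codewords, rng)
        best = min(books, key=lambda s: sum(distortion(f, books[s])
                                            for f in test_frames))
        votes[best] = votes.get(best, 0) + 1
    return max(votes, key=votes.get)
```

Voting over resampled codebooks is what damps the instability the abstract demonstrates: no single unlucky training sample dominates the decision.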

Speech Emotion Recognition Using Confidence Level for Emotional Interaction Robot

  • 김은호
    • Journal of Korean Institute of Intelligent Systems
    • /
    • Vol. 19, No. 6
    • /
    • pp.755-759
    • /
    • 2009
  • Recognizing human emotion is one of the important research topics in human-robot interaction. In particular, speaker-independent emotion recognition is an essential issue for the commercialization of speech emotion recognition. In general, a speaker-independent emotion recognition system shows a lower recognition rate than a speaker-dependent one, because emotional feature values vary with the speaker and with gender. This paper therefore presents a method for implementing a consistent and accurate speaker-independent emotion recognition system by rejecting recognition results using a confidence-level measure. The efficiency and feasibility of the proposed method are verified through comparison with a conventional method.
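
The confidence-based rejection idea above can be sketched with a softmax confidence over classifier scores; the threshold value and the softmax formulation are illustrative assumptions, not the paper's actual measure:

```python
import math

def classify_with_rejection(scores, threshold=0.6):
    """scores: {emotion: raw classifier score}. Convert scores to a softmax
    confidence; return (top emotion, confidence), or (None, confidence) to
    reject when the winner is not confident enough."""
    exps = {e: math.exp(s) for e, s in scores.items()}
    z = sum(exps.values())
    best = max(exps, key=exps.get)
    confidence = exps[best] / z
    return (best, confidence) if confidence >= threshold else (None, confidence)
```

Rejected utterances can then be re-prompted or ignored, trading coverage for the consistency the abstract targets.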

A Study on Phoneme Recognition Using Neural Networks and Fuzzy Logic

  • 한정현;최두일
    • The Korean Institute of Electrical Engineers: Conference Proceedings
    • /
    • Proceedings of the 1998 KIEE Summer Conference, Part G
    • /
    • pp.2265-2267
    • /
    • 1998
  • This paper presents a study of fast speaker-adaptive speech recognition. To analyze the speech signal efficiently in the time domain and the time-frequency domain, it uses SCONN[1] with a speech signal process suited to fast speaker adaptation, and speech recognition experiments were carried out to investigate the adaptability of the system, which receives speech data as input after a speaker-dependent recognition test.

  • PDF

A Study on the Improvement of DTW with Speech Silence Detection

  • 김종국;조왕래;배명진
    • Speech Sciences
    • /
    • Vol. 10, No. 4
    • /
    • pp.117-124
    • /
    • 2003
  • Speaker recognition is the technology that confirms a speaker's identity by using the characteristics of his or her speech. It is classified into speaker identification and speaker verification: the first discriminates the speaker from a preregistered group, while the second verifies the identity that a speaker claims. Because it extracts speaker information from speech and confirms individual identity, this technology has become one of the most useful as services over the telephone network have become popular. Several problems, however, must be solved for real applications. First, a safe method is needed to reject impostors, since recognition is not performed only for preregistered customers. Second, the characteristics of speech change over time, which severely degrades the recognition rate and inconveniences users as the number of required utterances of the text increases. Last, characteristics common to many speakers can cause incorrect recognition results. Silence intervals included within the speech also decrease the identification rate. In this paper, we show that the identification rate can be improved by removing the silence intervals before running the identification algorithm. Speech regions are detected using the zero-crossing rate and the signal energy, which locate the starting and end points of the speech, and the DTW algorithm is then applied. As a result, the proposed method achieves a recognition rate about 3% higher than the conventional method.

  • PDF
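
The pipeline above — endpoint detection to strip silence, then DTW matching — can be sketched as follows. The energy threshold is an illustrative assumption, and the paper additionally uses the zero-crossing rate for endpoint detection:

```python
def trim_silence(frames, energy_threshold=0.01):
    """Drop leading/trailing frames whose short-time energy falls below a
    threshold (a stand-in for the paper's energy + zero-crossing test)."""
    energies = [sum(s * s for s in f) / len(f) for f in frames]
    voiced = [i for i, e in enumerate(energies) if e > energy_threshold]
    if not voiced:
        return []
    return frames[voiced[0]:voiced[-1] + 1]

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping over feature frames."""
    inf = float("inf")
    d = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
    d[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = sum((x - y) ** 2 for x, y in zip(a[i - 1], b[j - 1]))
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[len(a)][len(b)]
```

Trimming first keeps the warping path from wasting alignment on silence, which is the source of the reported ~3% gain.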

Speaker-Independent Recognition Algorithm Based on Parameter Extraction by MFCC with the Wiener Filter Method

  • 최재승
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • Vol. 21, No. 6
    • /
    • pp.1149-1154
    • /
    • 2017
  • To obtain good recognition performance from a speech recognition system under background noise, it is very important to select appropriate speech feature parameters. The feature parameters used in this paper are mel-frequency cepstral coefficients (MFCCs), which exploit human auditory characteristics, with the Wiener filter method applied. That is, the proposed feature parameters follow a new approach that extracts parameters from the clean speech signal after removing the background noise. Speaker recognition is implemented by feeding the proposed modified MFCC feature parameters into a multilayer perceptron network for training. In the experiments, speaker-independent recognition tests were conducted using 14th-order MFCC feature parameters, and an average speaker-independent recognition rate of 94.48% was obtained for speech mixed with white noise, an effective result. Compared with existing methods, the proposed speaker recognition performance was improved by using the modified MFCC feature parameters.
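
The Wiener filtering step applied before MFCC extraction can be sketched as a per-bin spectral gain. This is a minimal spectral-subtraction form of the Wiener gain, with the noise power assumed to be estimated from leading non-speech frames (an assumption, not necessarily the paper's exact estimator):

```python
def wiener_gain(noisy_power, noise_power):
    """Per-frequency-bin Wiener gain G = max(P_x - P_n, 0) / P_x, applied to
    the noisy power spectrum before the mel filterbank / cepstrum stages."""
    return [max(p - n, 0.0) / p if p > 0 else 0.0
            for p, n in zip(noisy_power, noise_power)]

def denoise(noisy_power, noise_power):
    """Apply the gain to each bin; the result feeds the MFCC pipeline."""
    gains = wiener_gain(noisy_power, noise_power)
    return [g * p for g, p in zip(gains, noisy_power)]
```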

The Proposal of the Fuzzy Lyapunov Dimension for Speech Signals

  • 인준환;유병욱;유석한;정명진;김창석
    • Journal of the Institute of Electronics Engineers of Korea T
    • /
    • Vol. 36T, No. 4
    • /
    • pp.30-37
    • /
    • 1999
  • In this study, the fuzzy Lyapunov dimension is proposed. The fuzzy Lyapunov dimension evaluates quantitative changes in the attractor, and speaker recognition was evaluated with it in this paper. The proposed fuzzy Lyapunov dimension was confirmed to be a speaker recognition parameter that discriminates well between reference-pattern attractors while absorbing pattern variations within an attractor. To evaluate the fuzzy Lyapunov dimension, the validity of the speaker recognition parameter was examined by estimating the misrecognition caused by identification error for each speaker and reference pattern. Speaker recognition experiments yielded a recognition rate of 97.0%, confirming that the fuzzy Lyapunov dimension is suitable as a speaker recognition parameter.

  • PDF

Implementation of the Auditory Sense for the Smart Robot: Speaker/Speech Recognition

  • 조현;김경호;박영진
    • Korean Society for Noise and Vibration Engineering: Conference Proceedings
    • /
    • Proceedings of the 2007 KSNVE Spring Conference
    • /
    • pp.1074-1079
    • /
    • 2007
  • We introduce a speech/speaker recognition algorithm for isolated words. In the usual case of speaker verification, a Gaussian mixture model (GMM) is used to model the feature vectors of reference speech signals. On the other hand, a template-matching technique based on dynamic time warping (DTW) was proposed some years ago for isolated-word recognition. We combine these two different concepts in a single method and implement it in a real-time speaker/speech recognition system. With the proposed method, a small number of reference utterances (5 or 6 training repetitions) is enough to build a reference model that achieves 90% recognition performance.

  • PDF
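
The GMM side of the combined GMM/DTW method above can be sketched as average frame log-likelihood scoring; a 1-D mixture is used here for brevity (real systems use diagonal-covariance mixtures over MFCC vectors), and all names are illustrative:

```python
import math

def gmm_loglike(x, weights, means, variances):
    """Log-likelihood of a scalar frame x under a 1-D Gaussian mixture."""
    total = 0.0
    for w, m, v in zip(weights, means, variances):
        total += w * math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)
    return math.log(total)

def score_utterance(frames, gmm):
    """Average frame log-likelihood; a claimed speaker would be accepted
    when this score exceeds a decision threshold."""
    return sum(gmm_loglike(x, *gmm) for x in frames) / len(frames)
```

In the combined system, this GMM score handles speaker verification while a DTW template match (as in the previous entries) handles the isolated-word decision.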

Development of a Work Management System Based on Speech and Speaker Recognition

  • Gaybulayev, Abdulaziz;Yunusov, Jahongir;Kim, Tae-Hyong
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • Vol. 16, No. 3
    • /
    • pp.89-97
    • /
    • 2021
  • A voice interface can not only make daily life more convenient through artificial intelligence speakers but also improve the working environment of a factory. This paper presents a voice-assisted work management system that supports both speech and speaker recognition, providing machine control and authorized-worker authentication by voice at the same time. We applied two speech recognition methods, Google's Speech application programming interface (API) service and the DeepSpeech speech-to-text engine. For worker identification, the SincNet architecture for speaker recognition was adopted. We implemented a prototype of the work management system that provides voice control with 26 commands and identifies 100 workers by voice. Worker identification with our model was almost perfect, and the command recognition accuracy was 97.0% with the Google API after post-processing and 92.0% with our DeepSpeech model.