• Title/Summary/Keyword: speaker dependent system


Achieving Faster User Enrollment for Neural Speaker Verification Systems

  • Lee, Tae-Seung;Park, Sung-Won;Lim, Sang-Seok;Hwang, Byong-Won
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2003.09a
    • /
    • pp.205-208
    • /
    • 2003
  • While multilayer perceptrons (MLPs) show great promise for application to speaker verification, they suffer from slow learning. To appeal to users, MLP-based speaker verification systems must achieve a reasonable enrollment speed, and that speed depends entirely on fast MLP learning. Two previous studies addressed the problem of real-time enrollment, and each met the objective. In this paper the two methods are combined and applied to such systems, on the assumption that each operates on a different optimization principle. Experiments with an MLP-based speaker verification system, with the combination applied, on a real speech database confirm the feasibility of the combination (a minimal enrollment sketch follows this entry).

  • PDF
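The abstract above does not spell out the two fast-learning methods it combines, so the following is only a minimal sketch of the general enrollment setup it assumes: a small per-speaker MLP is trained to separate the enrolling speaker's feature frames from background-speaker frames, and enrollment time is essentially the time of that training run. scikit-learn's stock MLP, the feature dimension, and all data below are illustrative stand-ins, not the paper's method.

```python
# Hypothetical illustration: enrolling one speaker with a small MLP classifier.
# The fast-learning tricks combined in the paper are not reproduced here;
# scikit-learn's stock MLP stands in for the enrollment network.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
DIM = 13                                    # e.g. cepstral feature dimension (illustrative)

enrollee_frames = rng.normal(0.5, 1.0, size=(200, DIM))     # frames from the enrolling speaker
background_frames = rng.normal(-0.5, 1.0, size=(800, DIM))  # frames pooled from background speakers

X = np.vstack([enrollee_frames, background_frames])
y = np.concatenate([np.ones(len(enrollee_frames)), np.zeros(len(background_frames))])

# Enrollment = training the per-speaker MLP; enrollment time is dominated by this fit.
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300, random_state=0)
mlp.fit(X, y)

# Verification = average frame-level posterior for a test utterance vs. a threshold.
test_utterance = rng.normal(0.5, 1.0, size=(120, DIM))
score = mlp.predict_proba(test_utterance)[:, 1].mean()
print("accept" if score > 0.5 else "reject", round(float(score), 3))
```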

A Study on the Isolated word Recognition Using One-Stage DMS/DP for the Implementation of Voice Dialing System

  • Seong-Kwon Lee
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1994.06a
    • /
    • pp.1039-1045
    • /
    • 1994
  • Speech recognition systems using VQ usually suffer a drop in recognition rate because MSVQ assigns dissimilar vectors to the same segment. In this paper we apply the One-stage DMS/DP algorithm in recognition experiments and show to what degree it resolves these problems. The recognition experiments were performed on Korean DDD area names with a 20-section DMS model and word-unit templates. Experiments were carried out in both speaker-dependent and speaker-independent modes, yielding recognition rates of 97.7% and 81.7%, respectively (a DP-matching sketch follows this entry).

  • PDF
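As a rough illustration of the dynamic-programming matching behind DMS/DP-style isolated word recognition, the sketch below aligns a test feature sequence against word-unit templates with plain DTW and Euclidean frame distances; it is not the One-stage DMS/DP algorithm itself, and the templates and word names are synthetic placeholders.

```python
# Hypothetical illustration of the dynamic-programming template matching that
# underlies DMS/DP-style isolated word recognition; this is plain DTW with
# Euclidean frame distances, not the exact One-stage DMS/DP algorithm.
import numpy as np

def dtw_distance(test: np.ndarray, template: np.ndarray) -> float:
    """Accumulated DTW cost between two (frames x dim) feature sequences."""
    T, R = len(test), len(template)
    D = np.full((T + 1, R + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, T + 1):
        for j in range(1, R + 1):
            local = np.linalg.norm(test[i - 1] - template[j - 1])
            D[i, j] = local + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[T, R] / (T + R)   # length-normalized alignment cost

rng = np.random.default_rng(1)
templates = {name: rng.normal(size=(30, 12)) for name in ["seoul", "busan", "daejeon"]}
test_utt = templates["busan"] + rng.normal(scale=0.1, size=(30, 12))  # noisy repeat of "busan"

# Recognition = pick the word template with the lowest alignment cost.
best = min(templates, key=lambda w: dtw_distance(test_utt, templates[w]))
print("recognized:", best)
```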

Speech Emotion Recognition Using Confidence Level for Emotional Interaction Robot (감정 상호작용 로봇을 위한 신뢰도 평가를 이용한 화자독립 감정인식)

  • Kim, Eun-Ho
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.19 no.6
    • /
    • pp.755-759
    • /
    • 2009
  • The ability to recognize human emotion is one of the hallmarks of human-robot interaction. Speaker-independent emotion recognition in particular is a challenging issue for the commercial use of speech emotion recognition systems. In general, speaker-independent systems show lower accuracy than speaker-dependent systems, because emotional feature values depend on the speaker and his/her gender. Hence, this paper describes the realization of speaker-independent emotion recognition by rejecting low-confidence decisions using a confidence measure, making the emotion recognition system consistent and accurate. Comparison of the proposed methods with the conventional method clearly confirms their improvement and effectiveness (a confidence-based rejection sketch follows this entry).
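The abstract does not define its confidence measure, so the sketch below uses a common stand-in, the margin between the top two class posteriors, to show how low-confidence decisions can be rejected rather than forced into an emotion class. Labels, threshold, and posteriors are illustrative assumptions.

```python
# Hypothetical sketch of confidence-based rejection for speaker-independent
# emotion recognition: decisions whose confidence falls below a threshold are
# rejected instead of being forced into an emotion class. The confidence
# measure here (top-1 vs. top-2 posterior margin) is a common choice, not
# necessarily the one used in the paper.
import numpy as np

EMOTIONS = ["neutral", "happy", "sad", "angry"]
THRESHOLD = 0.15   # illustrative margin threshold

def classify_with_rejection(posteriors: np.ndarray) -> str:
    """Return an emotion label, or 'rejected' if the decision is unreliable."""
    order = np.argsort(posteriors)[::-1]
    margin = posteriors[order[0]] - posteriors[order[1]]
    if margin < THRESHOLD:
        return "rejected"
    return EMOTIONS[order[0]]

print(classify_with_rejection(np.array([0.10, 0.62, 0.18, 0.10])))  # confident -> "happy"
print(classify_with_rejection(np.array([0.28, 0.30, 0.22, 0.20])))  # ambiguous -> "rejected"
```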

Speaker verification system combining attention-long short term memory based speaker embedding and I-vector in far-field and noisy environments (Attention-long short term memory 기반의 화자 임베딩과 I-vector를 결합한 원거리 및 잡음 환경에서의 화자 검증 알고리즘)

  • Bae, Ara;Kim, Wooil
    • The Journal of the Acoustical Society of Korea
    • /
    • v.39 no.2
    • /
    • pp.137-142
    • /
    • 2020
  • Many I-vector based studies have been conducted in a variety of environments, from text-dependent short utterances to text-independent long utterances. In this paper, we propose a speaker verification system for far-field and noisy environments that combines an I-vector with Probabilistic Linear Discriminant Analysis (PLDA) and a speaker embedding from a Long Short Term Memory (LSTM) network with an attention mechanism. The LSTM model's Equal Error Rate (EER) is 15.52%, while the attention-LSTM model achieves 8.46%, an improvement of 7.06 percentage points. We show that the proposed method addresses the heuristic embedding definition of the existing extraction process. The EER of the I-vector/PLDA system alone is 6.18%, the best single-system performance; combined with the attention-LSTM embedding it drops to 2.57%, which is 3.61 percentage points below the baseline, a 58.41% relative improvement (a score-fusion sketch follows this entry).
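The combination rule is not given in the abstract, so the following is a hedged sketch of the two ingredients as commonly implemented: attention pooling of frame-level LSTM outputs into a speaker embedding, and score-level fusion of that branch with an I-vector/PLDA score. The attention form, cosine scoring, fusion weight, and all data are assumptions for illustration only.

```python
# Hypothetical sketch of combining two speaker-verification scores: a PLDA
# score from an I-vector system and a cosine score from an attention-pooled
# LSTM embedding. The fusion rule and attention form are common choices, not
# necessarily those used in the paper; all inputs are synthetic placeholders.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attentive_pooling(frame_outputs: np.ndarray, attn_w: np.ndarray) -> np.ndarray:
    """Collapse (frames x dim) LSTM outputs into one embedding with attention weights."""
    weights = softmax(frame_outputs @ attn_w)   # one scalar weight per frame
    return weights @ frame_outputs              # weighted sum over frames

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(2)
attn_w = rng.normal(size=64)                    # stand-in for a trained attention vector
enroll_emb = attentive_pooling(rng.normal(size=(200, 64)), attn_w)
test_emb = attentive_pooling(rng.normal(size=(150, 64)), attn_w)

lstm_score = cosine(enroll_emb, test_emb)       # attention-LSTM branch
plda_score = 0.7                                # placeholder I-vector/PLDA log-likelihood ratio

alpha = 0.5                                     # fusion weight (would be tuned on dev data)
fused = alpha * plda_score + (1.0 - alpha) * lstm_score
print("fused verification score:", round(fused, 3))
```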

A Study on the Channel Normalized Pitch Synchronous Cepstrum for Speaker Recognition (채널에 강인한 화자 인식을 위한 채널 정규화 피치 동기 켑스트럼에 관한 연구)

  • 김유진;정재호
    • The Journal of the Acoustical Society of Korea
    • /
    • v.23 no.1
    • /
    • pp.61-74
    • /
    • 2004
  • In this paper, a content- and speaker-dependent cepstrum extraction method and a channel normalization method that minimizes the loss of speaker characteristics in the cepstrum are proposed for a speaker recognition system that is robust to the channel. The proposed extraction method creates a cepstrum by pitch synchronous analysis using the speaker's inherent pitch. The resulting cepstrum, called the "pitch synchronous cepstrum" (PSC), therefore represents the impulse response of the vocal tract more accurately in voiced speech. The PSC can also compensate for channel distortion, because pitch is more robust to the channel environment than the speech spectrum. The proposed channel normalization method, the "formant-broadened pitch synchronous CMS" (FBPSCMS), applies the formant-broadened CMS to the PSC and improves the accuracy of the intraframe processing. We compared text-independent closed-set speaker identification on 56 female and 112 male speakers using the TIMIT and NTIMIT databases. The results show that the pitch synchronous cepstrum improves the error reduction rate by up to 7.7% compared with the conventional short-time cepstrum, and that the error rates of FBPSCMS are lower and more stable than those of pole-filtered CMS (a plain-CMS sketch follows this entry).
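As background for FBPSCMS, the sketch below shows plain cepstral mean subtraction (CMS): a fixed channel acts approximately as an additive constant in the cepstral domain, so subtracting the per-utterance mean removes it. The formant-broadening and pitch-synchronous analysis proposed in the paper are not reproduced; all data are synthetic.

```python
# Hypothetical sketch of cepstral mean subtraction (CMS), the channel
# normalization that FBPSCMS builds on: a fixed channel is (approximately)
# an additive constant in the cepstral domain, so subtracting the utterance
# mean removes it.
import numpy as np

def cms(cepstra: np.ndarray) -> np.ndarray:
    """Subtract the per-utterance mean from a (frames x coeffs) cepstrum matrix."""
    return cepstra - cepstra.mean(axis=0, keepdims=True)

rng = np.random.default_rng(3)
clean = rng.normal(size=(300, 13))           # cepstra of the clean utterance
channel_bias = rng.normal(size=(1, 13))      # fixed telephone-channel offset
observed = clean + channel_bias              # a fixed channel acts additively on cepstra

normalized = cms(observed)
# After CMS the channel offset is gone, up to the utterance's own mean:
print(np.allclose(normalized, cms(clean)))   # True
```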

Faster User Enrollment for Neural Speaker Verification Systems (신경망 기반 화자증명 시스템에서 더욱 향상된 사용자 등록속도)

  • Lee, Tae-Seung;Park, Sung-Won;Hwang, Byong-Won
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2003.10a
    • /
    • pp.1021-1026
    • /
    • 2003
  • While multilayer perceptrons (MLPs) show great promise for application to speaker verification, they suffer from slow learning. To appeal to users, MLP-based speaker verification systems must achieve a reasonable enrollment speed, and that speed depends entirely on fast MLP learning. Two previous studies addressed the problem of real-time enrollment, and each met the objective. In this paper, the two studies are combined and applied to such systems, on the assumption that each method operates on a different optimization principle. Experiments with an MLP-based speaker verification system, with the combination applied, on a real speech database confirm the feasibility of the combination.

  • PDF

Phonetic Transcription based Speech Recognition using Stochastic Matching Method (확률적 매칭 방법을 사용한 음소열 기반 음성 인식)

  • Kim, Weon-Goo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.17 no.5
    • /
    • pp.696-700
    • /
    • 2007
  • A new method that improves the performance of phonetic-transcription-based speech recognition with a speaker-independent (SI) phonetic recognizer is presented. Since an SI phoneme-HMM based speech recognition system uses only the phoneme transcription of the input sentence, storage space can be reduced greatly. However, its performance is worse than that of a speaker-dependent system because of the phoneme recognition errors introduced by the SI models. A new training method is presented that iteratively estimates the phonetic transcription and the transformation vectors, reducing the mismatch between the training utterances and the set of SI models through speaker adaptation; stochastic matching is used to estimate the transformation vectors. Experiments over an actual telephone line show that error rates can be reduced by about 45% compared with the conventional method (a bias-estimation sketch follows this entry).
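A rough sketch of the stochastic-matching idea described above, under a deliberately simplified model: a single Gaussian mean per phone stands in for the SI phoneme HMMs, the "transformation" is a single additive cepstral bias, and alignment is nearest-mean assignment rather than HMM decoding. The iterate-align-then-re-estimate loop is the point being illustrated; everything else is a synthetic assumption.

```python
# Hypothetical sketch of the stochastic-matching idea: estimate a bias vector
# that moves the utterance features toward the speaker-independent models,
# re-align with the shifted features, and iterate. A single Gaussian mean per
# phone stands in for the SI phoneme HMMs; all data are synthetic.
import numpy as np

rng = np.random.default_rng(4)
DIM = 13
si_means = {"a": rng.normal(size=DIM), "n": rng.normal(size=DIM)}   # toy SI models

true_bias = rng.normal(scale=0.5, size=DIM)          # unknown speaker/channel offset
frames = np.vstack([si_means["a"] + true_bias + rng.normal(scale=0.05, size=(50, DIM)),
                    si_means["n"] + true_bias + rng.normal(scale=0.05, size=(50, DIM))])

bias = np.zeros(DIM)
for _ in range(5):                                   # iterate: align, then re-estimate the bias
    shifted = frames - bias
    # "decode": assign each frame to the nearest SI phone mean (stand-in for HMM alignment)
    labels = [min(si_means, key=lambda p: np.linalg.norm(f - si_means[p])) for f in shifted]
    aligned_means = np.array([si_means[p] for p in labels])
    bias = (frames - aligned_means).mean(axis=0)     # ML estimate of the additive bias

print("bias estimation error:", round(float(np.linalg.norm(bias - true_bias)), 3))
```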

On-Line Linear Combination of Classifiers Based on Incremental Information in Speaker Verification

  • Huenupan, Fernando;Yoma, Nestor Becerra;Garreton, Claudio;Molina, Carlos
    • ETRI Journal
    • /
    • v.32 no.3
    • /
    • pp.395-405
    • /
    • 2010
  • A novel multiclassifier system (MCS) strategy is proposed and applied to a text-dependent speaker verification task. The presented scheme optimizes the linear combination of classifiers on an on-line basis. In contrast to ordinary MCS approaches, it requires neither a priori distributions nor pre-tuned weights, and it makes no assumptions about training/testing matching conditions. The idea is to improve the most accurate classifier by exploiting the incremental information provided by the second classifier; the on-line multiclassifier optimization approach is applicable to any pattern recognition problem. Results on the YOHO database show that the presented approach reduces the equal error rate by as much as 28% compared with the most accurate classifier, and by 11% compared with a standard method for optimizing the linear combination of classifiers (a linear score-combination sketch follows this entry).
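The paper's on-line weight-optimization rule is not reproduced below; the sketch only shows the linear combination itself, with the two classifiers' raw scores normalized against impostor score statistics and mixed by a weight that the on-line scheme would adapt per trial. All scores and distributions are synthetic assumptions.

```python
# Hypothetical sketch of a linear combination of two speaker-verification
# classifiers. The scores are mean/variance normalized against impostor
# statistics and mixed with a fixed weight; the paper's on-line rule would
# adapt this weight per trial.
import numpy as np

def znorm(score: float, impostor_scores: np.ndarray) -> float:
    """Normalize a raw score against an impostor score distribution."""
    return (score - impostor_scores.mean()) / impostor_scores.std()

rng = np.random.default_rng(5)
impostors_c1 = rng.normal(0.0, 1.0, size=500)    # impostor scores of classifier 1 (most accurate)
impostors_c2 = rng.normal(10.0, 4.0, size=500)   # impostor scores of classifier 2

s1, s2 = 2.1, 21.0                               # raw scores of the two classifiers for one trial
n1, n2 = znorm(s1, impostors_c1), znorm(s2, impostors_c2)

w = 0.8                                          # weight on the most accurate classifier
combined = w * n1 + (1.0 - w) * n2
print("combined score:", round(combined, 3), "->", "accept" if combined > 1.5 else "reject")
```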

Implementation of voice Command System to control the Car Sunroof (자동차 선루프 제어용 음성 명령 시스템 구현)

  • 정윤식;임재열
    • Proceedings of the IEEK Conference
    • /
    • 1999.06a
    • /
    • pp.1095-1098
    • /
    • 1999
  • We have developed a speaker-dependent voice command system (VCS) to control a car sunroof using the RSC-164 Voice Recognition Processor (VRP). The VCS consists of control circuits, a microphone, a speaker, and a user switch box. The control circuits include the RSC-164, an input audio preamplifier, memory devices, and a relay circuit for sunroof control. The system is designed to be robust to various in-car noise conditions, such as loud audio, the air conditioner, and incoming noise when the window or sunroof is open. Two users can each control the sunroof using seven voice commands on the Super TVS model and five voice commands on the Onyx model. The system works well while driving at over 100 km/h with the sunroof open.

  • PDF

A Study on the Speech Recognition for Commands of Ticketing Machine using CHMM (CHMM을 이용한 발매기 명령어의 음성인식에 관한 연구)

  • Kim, Beom-Seung;Kim, Soon-Hyob
    • Journal of the Korean Society for Railway
    • /
    • v.12 no.2
    • /
    • pp.285-290
    • /
    • 2009
  • This paper implements a speech recognition system that recognizes ticketing machine commands (314 station names) in real time using Continuous Hidden Markov Models. Feature vectors are 39-dimensional MFCCs, and 895 tied-state triphone models were composed to improve the recognition rate. In the system performance evaluation, the multi-speaker-dependent and multi-speaker-independent recognition rates are 99.24% and 98.02%, respectively; in a noisy environment the recognition rate is 93.91% (a feature-extraction sketch follows this entry).
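Assuming the usual composition of a 39-dimensional MFCC vector, 13 static coefficients plus first- and second-order deltas, which the abstract does not state explicitly, the sketch below builds such features with librosa from a synthetic tone rather than real ticketing-machine speech.

```python
# Hypothetical sketch of a 39-dimensional MFCC feature vector of the kind the
# abstract mentions, assuming 13 static MFCCs + deltas + delta-deltas; the
# input is a synthetic tone, not speech.
import numpy as np
import librosa

sr = 16000
y = 0.1 * np.sin(2 * np.pi * 440.0 * np.arange(sr) / sr)   # 1 s synthetic signal

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)          # (13, frames) static coefficients
d1 = librosa.feature.delta(mfcc)                            # first-order deltas
d2 = librosa.feature.delta(mfcc, order=2)                   # second-order deltas

features = np.vstack([mfcc, d1, d2])                        # (39, frames) feature matrix
print(features.shape[0], "coefficients per frame")          # -> 39
```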