• Title/Summary/Keyword: Speech Training

Performance Comparison of Multiple-Model Speech Recognizer with Multi-Style Training Method Under Noisy Environments (잡음 환경하에서의 다 모델 기반인식기와 다 스타일 학습방법과의 성능비교)

  • Yoon, Jang-Hyuk;Chung, Young-Joo
    • The Journal of the Acoustical Society of Korea
    • /
    • v.29 no.2E
    • /
    • pp.100-106
    • /
    • 2010
  • The multiple-model speech recognizer has been shown to be quite successful in noisy speech recognition. However, its performance has usually been tested with general speech front-ends that do not incorporate any noise-adaptive algorithms. To evaluate the effectiveness of the multiple-model framework in noisy speech recognition more accurately, we used state-of-the-art front-ends and compared its performance with the well-known multi-style training method. In addition, we improved the multiple-model speech recognizer by employing N-best reference HMMs for interpolation and using multiple SNR levels for training each of the reference HMMs.
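
The N-best interpolation idea in this abstract can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the function name, the inverse-distance weighting, and the use of Gaussian means as the interpolated quantity are all assumptions.

```python
import numpy as np

# Hypothetical sketch: each reference HMM is trained at a known SNR.
# At test time, the N reference models closest to the estimated SNR of
# the input are selected and their Gaussian means are interpolated,
# weighted by inverse distance in SNR.
def interpolate_references(test_snr, ref_snrs, ref_means, n_best=2):
    """ref_snrs: training SNRs (dB); ref_means: matching mean vectors."""
    dists = np.abs(np.asarray(ref_snrs, dtype=float) - test_snr)
    idx = np.argsort(dists)[:n_best]          # N-best closest references
    w = 1.0 / (dists[idx] + 1e-6)             # inverse-distance weights
    w /= w.sum()
    return w @ np.asarray(ref_means, dtype=float)[idx]
```

In this sketch an exact SNR match dominates the interpolation, while an in-between SNR blends the neighboring reference models.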

The Effect of Respiration and Articulator Training Programs on Basic Ability of Speech Production in Cerebral Palsy Children (호흡 및 조음기관 훈련 프로그램이 뇌성마비아동의 말 산출 기초능력에 미치는 효과)

  • Lee, Gum-Suk;Yoo, Jae-Yeon
    • Speech Sciences
    • /
    • v.15 no.3
    • /
    • pp.103-116
    • /
    • 2008
  • Cerebral palsy children exhibit abnormal vocalization patterns caused by respiration problems and paralyzed oral motor muscles, which are the basics of speech production. Thus, this study examined the effect of respiration and articulator training programs on the basic ability of speech production in CP children. The subjects were 4 children, 3 with spastic CP and 1 with ataxic CP. The respiration and articulator program was conducted over 30 sessions of 30 minutes each. A pre-test was administered twice before the program, an ongoing test was administered every 5 sessions during the experimental period, and a post-test was administered twice. The program included respiration training; lip, jaw, cheek, and tongue exercises; velopharyngeal training; and related articulator training. The following results were obtained. First, all subjects had a maximum phonation time of less than 5 seconds before the experiment; 2 improved by more than 4~5 seconds during the experiment, while the other 2 showed relatively small gains. Second, children whose vocal intensity was below 30 dB before the experiment produced greater intensity during the experiment, while children above 35 dB showed only minor change. Subject 4 showed lower vocal intensity in the post-test period. Finally, although syllable diadochokinetic ability differed among subjects, it improved overall, and the number of repetitions per breath also increased.

Emotion Robust Speech Recognition using Speech Transformation (음성 변환을 사용한 감정 변화에 강인한 음성 인식)

  • Kim, Weon-Goo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.20 no.5
    • /
    • pp.683-687
    • /
    • 2010
  • This paper studies methods that use frequency warping, one of the speech transformation methods, to develop a speech recognition system robust to emotional variation. For this purpose, the effect of emotional variation on the speech signal was studied using a speech database containing various emotions, and it was observed that the speech spectrum is affected by emotional variation and that this effect is one of the reasons the performance of the speech recognition system degrades. A new training method that applies frequency warping in the training process is presented to reduce the effect of emotional variation, and a speech recognition system based on vocal tract length normalization is developed for comparison with the proposed system. Experimental results from isolated word recognition using HMMs showed that the new training method reduced the error rate of the conventional recognition system on speech containing various emotions.
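
Frequency warping of the kind this abstract refers to can be illustrated with a minimal sketch. The linear warp and resampling below are assumptions for illustration; the paper's actual warping function is not reproduced here.

```python
import numpy as np

# Minimal VTLN-style linear frequency warp: resample a magnitude
# spectrum along a scaled frequency axis with factor alpha.
def warp_spectrum(spectrum, alpha):
    """alpha > 1 compresses spectral content toward the low bins."""
    n = len(spectrum)
    src = np.clip(np.arange(n) * alpha, 0, n - 1)  # warped source bins
    return np.interp(src, np.arange(n), spectrum)
```

During training, each utterance's spectrum would be warped over a range of alpha values so the acoustic models see the spectral variation that emotion introduces.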

On Speaker Adaptations with Sparse Training Data for Improved Speaker Verification

  • Ahn, Sung-Joo;Kang, Sun-Mee;Ko, Han-Seok
    • Speech Sciences
    • /
    • v.7 no.1
    • /
    • pp.31-37
    • /
    • 2000
  • This paper concerns effective speaker adaptation methods to solve the over-training problem in speaker verification, which frequently occurs when modeling a speaker with sparse training data. While various speaker adaptation methods have already been applied to speech recognition, they have not yet been formally considered in speaker verification. This paper proposes speaker adaptation methods using a combination of MAP and MLLR adaptation, which are used successfully in speech recognition, and applies them to speaker verification. Experimental results show that the speaker verification system using weighted MAP and MLLR adaptation outperforms conventional speaker models without adaptation by a factor of up to 5. From these results, we show that the speaker adaptation method achieves significantly better performance even when only a small amount of training data is available for speaker verification.
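
As a hedged sketch of the two adaptation steps being combined: MAP interpolates a model's prior mean with the adaptation-data mean, while MLLR applies a shared affine transform. The relevance factor `tau`, the weight `w`, and the simple mean-level combination below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# MAP update of a Gaussian mean: the prior mean is pulled toward the
# data mean, with relevance factor tau controlling the prior's weight.
def map_adapt(prior_mean, data_mean, n_frames, tau=10.0):
    return (tau * prior_mean + n_frames * data_mean) / (tau + n_frames)

# MLLR update: one affine transform (A, b) shared across Gaussians.
def mllr_adapt(mean, A, b):
    return A @ mean + b

# Weighted combination of the two adapted means.
def combined_adapt(prior_mean, data_mean, n_frames, A, b, w=0.5):
    return w * map_adapt(prior_mean, data_mean, n_frames) \
        + (1.0 - w) * mllr_adapt(prior_mean, A, b)
```

With sparse data, MAP barely moves the model while MLLR's shared transform still adapts every Gaussian, which is why combining them helps.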

Performance Improvement of SPLICE-based Noise Compensation for Robust Speech Recognition (강인한 음성인식을 위한 SPLICE 기반 잡음 보상의 성능향상)

  • Kim, Hyung-Soon;Kim, Doo-Hee
    • Speech Sciences
    • /
    • v.10 no.3
    • /
    • pp.263-277
    • /
    • 2003
  • One of the major problems in speech recognition is performance degradation due to the mismatch between the training and test environments. Recently, Stereo-based Piecewise LInear Compensation for Environments (SPLICE), a frame-based bias-removal algorithm for cepstral enhancement that uses stereo training data and models noisy speech as a mixture of Gaussians, was proposed and showed good performance in noisy environments. In this paper, we propose several methods to improve the conventional SPLICE. First, we apply Cepstral Mean Subtraction (CMS) as a preprocessor to SPLICE instead of applying it as a postprocessor. Second, to compensate for residual distortion after SPLICE processing, a two-stage SPLICE is proposed. Third, we employ phonetic information for training the SPLICE model. In experiments on the Aurora 2 database, the proposed method outperformed conventional SPLICE, and we achieved a 50% decrease in word error rate over the Aurora baseline system.
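
The core SPLICE step described here, subtracting a posterior-weighted bias from each noisy cepstral frame, can be sketched as follows. The diagonal-Gaussian mixture and the variable names are assumptions for illustration; training the biases from stereo data is not shown.

```python
import numpy as np

# SPLICE-style enhancement of one noisy cepstral frame y:
#   x_hat = y - sum_k p(k | y) * r_k
# where p(k | y) are posteriors under a GMM of noisy speech and r_k
# are per-component bias vectors learned from stereo (clean, noisy) data.
def splice_enhance(y, means, variances, biases):
    # log-likelihood of y under each diagonal-Gaussian component
    ll = -0.5 * (((y - means) ** 2) / variances + np.log(variances)).sum(axis=1)
    post = np.exp(ll - ll.max())
    post /= post.sum()                  # component posteriors p(k | y)
    return y - post @ biases            # subtract the expected bias
```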

Maximum Likelihood Training and Adaptation of Embedded Speech Recognizers for Mobile Environments

  • Cho, Young-Kyu;Yook, Dong-Suk
    • ETRI Journal
    • /
    • v.32 no.1
    • /
    • pp.160-162
    • /
    • 2010
  • For the acoustic models of embedded speech recognition systems, hidden Markov models (HMMs) are usually quantized, and the original full-space distributions are represented by combinations of a few quantized distribution prototypes. We propose a maximum likelihood objective function to train the quantized distribution prototypes. The experimental results show that the new training algorithm and the link structure adaptation scheme for the quantized HMMs reduce the word recognition error rate by 20.0%.
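
The quantization being described, representing full-space distributions by a few prototypes, can be sketched in its simplest form below. Nearest-prototype assignment over Gaussian means is an assumption for illustration; the paper's maximum likelihood prototype training and link structure adaptation are not reproduced.

```python
import numpy as np

# Replace each Gaussian mean with its nearest prototype from a small
# codebook, the basic memory-saving step in quantized HMMs.
def quantize_means(means, prototypes):
    d = ((means[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    return prototypes[d.argmin(axis=1)]
```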

Accurate Speech Detection based on Sub-band Selection for Robust Keyword Recognition (강인한 핵심어 인식을 위해 유용한 주파수 대역을 이용한 음성 검출기)

  • Ji Mikyong;Kim Hoirin
    • Proceedings of the KSPS conference
    • /
    • 2002.11a
    • /
    • pp.183-186
    • /
    • 2002
  • Speech detection is one of the important problems in real-time speech recognition, since accurate detection of speech boundaries is crucial to the performance of a speech recognizer. In this paper, we propose a speech detector based on Mel-band selection through training. To show the merit of the proposed algorithm, we compare it with a conventional one, the so-called EPD-VAA (EndPoint Detector based on Voice Activity Detection). The proposed speech detector is trained to extract keyword speech better than other speech. EPD-VAA usually works well at high SNR but no longer works well at low SNR. The proposed algorithm, in contrast, pre-selects useful bands through keyword training and decides the speech boundary according to the energy level of the pre-selected sub-bands. The experimental results show that the proposed algorithm outperforms EPD-VAA.
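
The band-selection idea can be sketched as follows: sum frame energy only over the pre-selected (keyword-discriminative) mel bands and threshold it. The band indices and threshold here are placeholders; the paper selects the bands through keyword training.

```python
import numpy as np

# Mark frames as speech when energy summed over the selected mel
# bands exceeds a threshold.
def detect_speech(band_energies, selected_bands, threshold):
    """band_energies: (n_frames, n_bands) array -> boolean speech mask."""
    e = band_energies[:, selected_bands].sum(axis=1)
    return e > threshold
```

Restricting the sum to informative bands keeps broadband noise in the discarded bands from inflating the frame energy, which is the claimed advantage at low SNR.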

A Study on the Influence of English Vowel Pronunciation Training on Word Initial Stop Pronunciation of Korean English Learners (영어 모음 발음 교육이 한국인 학습자의 어두 폐쇄음 발화에 미치는 영향에 대한 연구)

  • Kim, Ji-Eun
    • Phonetics and Speech Sciences
    • /
    • v.5 no.3
    • /
    • pp.31-38
    • /
    • 2013
  • This study investigated the influence of English vowel pronunciation training on English word-initial stop pronunciation. For that purpose, VOT values of English stops produced by twenty Korean English learners (five Youngnam dialect male speakers, five Youngnam dialect female speakers, five Kangwon dialect male speakers, and five Kangwon dialect female speakers) were measured using the Speech Analyzer, and their post-training production was compared with their pre-training production. The results show that post-training VOT values of voiced stops became closer to those of native English speakers in all four groups. Hence, it can be inferred that vowel pronunciation training is effective for correcting the pronunciation of voiced stops, by analyzing the change in the quality of the following vowels (especially low vowels) and the degree of stress given.

A Training Method for Emotionally Robust Speech Recognition using Frequency Warping (주파수 와핑을 이용한 감정에 강인한 음성 인식 학습 방법)

  • Kim, Weon-Goo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.20 no.4
    • /
    • pp.528-533
    • /
    • 2010
  • This paper studies training methods less affected by emotional variation, for the development of a robust speech recognition system. For this purpose, the effects of emotional variation on the speech signal and the speech recognition system were studied using a speech database containing various emotions. The performance of a speech recognition system trained on speech containing no emotion deteriorates when the test speech contains emotion, because of the emotional difference between the test and training data. In this study, it is observed that the vocal tract length of the speaker is affected by emotional variation and that this effect is one of the reasons the performance of the speech recognition system degrades. A training method that covers these speech variations is proposed to develop an emotionally robust speech recognition system. Experimental results from isolated word recognition using HMMs showed that the proposed method reduced the error rate of the conventional recognition system by 28.4% when emotional test data was used.

Hyperparameter experiments on end-to-end automatic speech recognition

  • Yang, Hyungwon;Nam, Hosung
    • Phonetics and Speech Sciences
    • /
    • v.13 no.1
    • /
    • pp.45-51
    • /
    • 2021
  • End-to-end (E2E) automatic speech recognition (ASR) has achieved promising performance gains with the introduction of the self-attention network, the Transformer. However, due to training time and the number of hyperparameters, finding the optimal hyperparameter set is computationally expensive. This paper investigates the impact of hyperparameters in the Transformer network to answer two questions: which hyperparameters play a critical role in task performance, and which in training speed. The Transformer network for training consists of encoder and decoder networks combined with Connectionist Temporal Classification (CTC). We trained the model on Wall Street Journal (WSJ) SI-284 and tested on dev93 and eval92. Seventeen hyperparameters were selected from the ESPnet training configuration, and varying ranges of values were used for the experiments. The results show that the "num blocks" and "linear units" hyperparameters in the encoder and decoder networks reduce Word Error Rate (WER) significantly. However, the performance gain is more prominent when they are altered in the encoder network. Training duration also increased linearly as the values of the "num blocks" and "linear units" hyperparameters grew. Based on the experimental results, we collected the optimal value of each hyperparameter and reduced the WER to 2.9/1.9 on dev93 and eval92, respectively.
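
The sweep described above can be sketched as a small grid search over the two hyperparameters the paper finds most influential. The candidate values and the `evaluate` callback are illustrative assumptions, not the paper's actual grid or the ESPnet interface.

```python
from itertools import product

# Try every (num_blocks, linear_units) pair and keep the setting with
# the lowest WER reported by the caller-supplied evaluate() function.
def best_setting(evaluate, num_blocks=(6, 12), linear_units=(1024, 2048)):
    return min(product(num_blocks, linear_units),
               key=lambda cfg: evaluate(*cfg))
```

In practice each `evaluate` call is a full training run, which is why the paper varies one hyperparameter at a time instead of sweeping the full grid.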