• Title/Summary/Keyword: Speech Training


The effects of repeated speech training using speech cues on the percentage of correct consonants and speech intelligibility in children with cerebral palsy: A single-subject design research (Speech cues를 이용한 반복훈련이 뇌성마비 아동의 자음정확도 및 말명료도에 미치는 영향: 단일대상연구)

  • Seo, Saehee;Jeong, Pilyeon;Sim, Hyunsub
    • Phonetics and Speech Sciences
    • /
    • v.13 no.3
    • /
    • pp.79-90
    • /
    • 2021
  • This single-subject study examined the effects of repetitive speech training at the word and sentence levels using speech cues on the percentage of correct consonants (PCC) and speech intelligibility of children with cerebral palsy (CP). Three children aged 5 to 8 years with CP participated in the study. Thirty-minute intervention sessions were provided four times a week for four weeks. The intervention consisted of repeated training of words and sentences containing target phonemes using two speech-cue instructions, "big mouth" and "strong voice". First, the children improved their average PCC and speech intelligibility, although an effect-size analysis indicated that the effect differed across children and was larger for speech intelligibility than for PCC. Second, the intervention effect generalized to untrained words and sentences. Third, the maintenance effects for PCC and speech intelligibility were very high. These findings suggest that repeated speech training using speech cues is an intervention technique that can help improve PCC and speech intelligibility in children with CP.

Vector Quantization by N-ary Search of a Codebook (코우드북의 절충탐색에 의한 벡터양자화)

  • Lee, Chang-Young
    • Speech Sciences
    • /
    • v.8 no.3
    • /
    • pp.143-148
    • /
    • 2001
  • We propose a new scheme for VQ codebook search. The procedure lies between binary tree search and full search and thus might be called N-ary search of a codebook. In an experiment on 7,200 frames spoken by 25 speakers, we confirmed that codewords as good as those found by full search were obtained at a time cost comparable to that of binary tree search. In application to speech recognition by HMM/VQ with the Bakis model, where the appearance of a specific codeword is essential in the parameter training phase, the method proposed here is expected to provide an efficient training procedure.
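The search strategy described in this abstract can be illustrated as a two-stage procedure: group the codewords into buckets, then run a full search only inside the N buckets whose centroids are nearest to the input vector. The following NumPy sketch is an illustrative assumption, not the paper's exact algorithm; the bucketing via plain k-means and all function names are hypothetical:

```python
import numpy as np

def build_buckets(codebook, n_buckets, iters=10, seed=0):
    """Partition the codebook into buckets with plain k-means, returning
    each codeword's bucket id and the bucket centroids."""
    rng = np.random.default_rng(seed)
    centroids = codebook[rng.choice(len(codebook), n_buckets, replace=False)]
    for _ in range(iters):
        # assign every codeword to its nearest bucket centroid
        dists = np.linalg.norm(codebook[:, None] - centroids[None], axis=2)
        ids = np.argmin(dists, axis=1)
        for b in range(n_buckets):
            if np.any(ids == b):
                centroids[b] = codebook[ids == b].mean(axis=0)
    return ids, centroids

def nary_search(x, codebook, ids, centroids, n_search):
    """Full search restricted to the n_search nearest buckets: a middle
    ground between a tree search (small n_search) and full search
    (n_search equal to the number of buckets)."""
    order = np.argsort(np.linalg.norm(centroids - x, axis=1))[:n_search]
    cand = np.flatnonzero(np.isin(ids, order))
    d = np.linalg.norm(codebook[cand] - x, axis=1)
    return cand[np.argmin(d)]
```

With `n_search` set to the total number of buckets the result coincides with full search, while small values trade a little accuracy for speed.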


Harmonics-based Spectral Subtraction and Feature Vector Normalization for Robust Speech Recognition

  • Beh, Joung-Hoon;Lee, Heung-Kyu;Kwon, Oh-Il;Ko, Han-Seok
    • Speech Sciences
    • /
    • v.11 no.1
    • /
    • pp.7-20
    • /
    • 2004
  • In this paper, we propose a two-step noise compensation algorithm in feature extraction for achieving robust speech recognition. The proposed method requires no a priori information on noisy environments and is simple to implement. First, in the frequency domain, Harmonics-based Spectral Subtraction (HSS) is applied so that it reduces the additive background noise and makes the shape of harmonics in the speech spectrum more pronounced. We then apply a judiciously weighted-variance Feature Vector Normalization (FVN) to compensate for both the channel distortion and additive noise. The weighted-variance FVN compensates for the variance mismatch in the speech and non-speech regions separately. Representative performance evaluation using the Aurora 2 database shows that the proposed method yields 27.18% relative improvement in accuracy under a multi-noise training task and 57.94% relative improvement under a clean training task.
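The two compensation steps above can be sketched in a few lines. This is a minimal stand-in, not the paper's method: the harmonics weighting of HSS is replaced by plain magnitude-domain spectral subtraction, and the speech/non-speech weights are illustrative assumptions:

```python
import numpy as np

def spectral_subtraction(mag, noise_est, floor=0.05):
    """Subtract a noise magnitude estimate from a magnitude spectrum,
    flooring the result to avoid negative values (plain spectral
    subtraction; the harmonics emphasis of HSS is omitted here)."""
    cleaned = mag - noise_est
    return np.maximum(cleaned, floor * mag)

def weighted_variance_fvn(cep, is_speech, w_speech=1.0, w_nonspeech=0.5):
    """Mean/variance-normalize a cepstral sequence (frames x dims),
    estimating the variance as a weighted mix of speech and non-speech
    frames so the two regions are compensated separately."""
    mu = cep.mean(axis=0)
    var_sp = cep[is_speech].var(axis=0)
    var_ns = cep[~is_speech].var(axis=0)
    var = w_speech * var_sp + w_nonspeech * var_ns
    return (cep - mu) / np.sqrt(var + 1e-8)
```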


Low Frequency Perception of Rhythm and Intonation Speech Patterns by Normal Hearing Adults

  • Kim, Young-Sun;Asp, Carl-W.
    • Speech Sciences
    • /
    • v.9 no.1
    • /
    • pp.7-16
    • /
    • 2002
  • This study tested normal-hearing adults' auditory perception of rhythm and intonation patterns with low-frequency speech energy. The results showed that the narrow-band low-frequency zones of 125, 250, or 500 Hz provided the same important rhythm and intonation cues as the wide-band condition did. This suggested that an auditory training strategy using low-frequency filters would be effective for structuring or re-structuring the perception of rhythm and intonation patterns. These filters force the client to focus on these patterns, because speech intelligibility is drastically reduced. This strategy can be used with both normal-hearing and hearing-impaired children and adults with poor listening skills, and possibly poor speech intelligibility.
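The narrow-band filtering condition described above is easy to reproduce digitally. The sketch below keeps only a chosen low-frequency band via FFT masking (an offline, zero-phase stand-in for the study's filters; the band edges are illustrative):

```python
import numpy as np

def narrowband_filter(x, sr, lo_hz, hi_hz):
    """Keep only the [lo_hz, hi_hz] band of signal x (sample rate sr)
    by zeroing all other FFT bins. Prosodic rhythm and F0 movement
    survive; most segmental intelligibility cues are removed."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / sr)
    spec[(freqs < lo_hz) | (freqs > hi_hz)] = 0.0
    return np.fft.irfft(spec, n=len(x))

# Demo: a 250 Hz "voice pitch" component plus a 2000 Hz component;
# filtering around 250 Hz keeps the former and removes the latter.
sr = 8000
t = np.arange(sr) / sr
sig = np.sin(2 * np.pi * 250 * t) + np.sin(2 * np.pi * 2000 * t)
low_only = narrowband_filter(sig, sr, 125.0, 500.0)
```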


DNN-based acoustic modeling for speech recognition of native and foreign speakers (원어민 및 외국인 화자의 음성인식을 위한 심층 신경망 기반 음향모델링)

  • Kang, Byung Ok;Kwon, Oh-Wook
    • Phonetics and Speech Sciences
    • /
    • v.9 no.2
    • /
    • pp.95-101
    • /
    • 2017
  • This paper proposes a new method to train Deep Neural Network (DNN)-based acoustic models for speech recognition of native and foreign speakers. The proposed method consists of determining multi-set state clusters with various acoustic properties, training a DNN-based acoustic model, and recognizing speech based on the model. In the proposed method, hidden nodes of DNN are shared, but output nodes are separated to accommodate different acoustic properties for native and foreign speech. In an English speech recognition task for speakers of Korean and English respectively, the proposed method is shown to slightly improve recognition accuracy compared to the conventional multi-condition training method.
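The shared-hidden / separate-output architecture in this abstract can be sketched as a forward pass with one hidden stack and one softmax head per speaker group. This is a minimal illustration under assumed layer sizes, not the paper's trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

class SharedHiddenAcousticModel:
    """One shared hidden layer; separate softmax output layers for
    'native' and 'foreign' state sets (illustrative sizes/weights)."""
    def __init__(self, n_in, n_hid, n_states_native, n_states_foreign):
        self.W1 = rng.standard_normal((n_in, n_hid)) * 0.1
        self.W_out = {
            "native": rng.standard_normal((n_hid, n_states_native)) * 0.1,
            "foreign": rng.standard_normal((n_hid, n_states_foreign)) * 0.1,
        }

    def forward(self, x, group):
        h = np.tanh(x @ self.W1)           # shared representation
        logits = h @ self.W_out[group]     # group-specific state scores
        e = np.exp(logits - logits.max())  # stable softmax
        return e / e.sum()
```

During training, gradients from both groups would update `W1` while each head sees only its own group's data, which is how the shared nodes accommodate both acoustic property sets.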

ETRI small-sized dialog style TTS system (ETRI 소용량 대화체 음성합성시스템)

  • Kim, Jong-Jin;Kim, Jeong-Se;Kim, Sang-Hun;Park, Jun;Lee, Yun-Keun;Hahn, Min-Soo
    • Proceedings of the KSPS conference
    • /
    • 2007.05a
    • /
    • pp.217-220
    • /
    • 2007
  • This study outlines a small-sized dialog-style ETRI Korean TTS system that applies HMM-based speech synthesis techniques. To build the VoiceFont, 500 dialog-style sentences were used to train the HMMs, and context information about phonemes, syllables, words, phrases, and sentences was extracted fully automatically to build context-dependent HMMs. In training the acoustic model, acoustic features such as Mel-cepstra, log F0, and their delta and delta-delta coefficients were used. The size of the VoiceFont built through training is 0.93 MB. The developed HMM-based TTS system was installed on an ARM720T processor running at 60 MHz. To reduce computation time, the MLSA inverse filtering module was implemented in assembly language. The fully implemented system runs 1.73 times faster than real time.


Discriminative Training of Stochastic Segment Model Based on HMM Segmentation for Continuous Speech Recognition

  • Chung, Yong-Joo;Un, Chong-Kwan
    • The Journal of the Acoustical Society of Korea
    • /
    • v.15 no.4E
    • /
    • pp.21-27
    • /
    • 1996
  • In this paper, we propose a discriminative training algorithm for the stochastic segment model (SSM) in continuous speech recognition. As the SSM is usually trained by maximum likelihood estimation (MLE), a discriminative training algorithm is required to improve the recognition performance. Since the SSM does not assume the conditional independence of the observation sequence as is done in hidden Markov models (HMMs), the search space for decoding an unknown input utterance increases considerably. To reduce the computational complexity and search space in an iterative discriminative training algorithm for SSMs, a hybrid architecture of SSMs and HMMs is used, in which segment boundaries are obtained by HMM-based segmentation. Given the segment boundaries, the parameters of the SSM are discriminatively trained under the minimum classification error criterion using a generalized probabilistic descent (GPD) method. With the discriminative training of the SSM, the word error rate is reduced by 17% compared with the MLE-trained SSM in speaker-independent continuous speech recognition.
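The minimum-classification-error/GPD idea used here can be shown on a toy problem. Below, linear discriminants stand in for SSM segment scores (an assumption for illustration only): a smoothed misclassification measure is passed through a sigmoid loss, and one probabilistic-descent step raises the correct class score while lowering the competitors':

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gpd_update(W, x, y, lr=0.1, eta=2.0):
    """One GPD step for the MCE criterion with toy linear discriminants
    g_k = w_k . x. Returns the updated weights and the pre-update loss."""
    g = W @ x
    others = np.array([k for k in range(len(g)) if k != y])
    comp = g[others]
    # smoothed anti-discriminant over competing classes
    G = np.log(np.mean(np.exp(eta * comp))) / eta
    d = -g[y] + G                 # misclassification measure
    loss = sigmoid(d)             # smooth 0-1 loss
    grad = loss * (1.0 - loss)    # d(loss)/d(d)
    p = np.exp(eta * comp)
    p /= p.sum()                  # competitor softmax weights
    W_new = W.copy()
    W_new[y] += lr * grad * x                      # push correct class up
    W_new[others] -= lr * grad * np.outer(p, x)    # push competitors down
    return W_new, loss
```

Repeating the update on a training sample drives the smoothed error toward zero, which is the mechanism behind the reported word-error-rate reduction.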


Robust Speech Recognition Using Weighted Auto-Regressive Moving Average Filter (가중 ARMA 필터를 이용한 강인한 음성인식)

  • Ban, Sung-Min;Kim, Hyung-Soon
    • Phonetics and Speech Sciences
    • /
    • v.2 no.4
    • /
    • pp.145-151
    • /
    • 2010
  • In this paper, a robust feature compensation method is proposed for improving the performance of speech recognition. The proposed method is incorporated into auto-regressive moving average (ARMA)-based feature compensation. We employ variable weights for the ARMA filter according to the degree of speech activity, and pass the normalized cepstral sequence through the weighted ARMA filter. Additionally, when normalizing the cepstral sequences in training, the cepstral means and variances are estimated from all training utterances. Experimental results show the proposed method significantly improves speech recognition performance in noisy and reverberant environments.
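An ARMA smoother over a cepstral sequence averages past (already filtered) outputs with current and future inputs; the "weighted" variant blends the smoothed value with the raw frame according to a speech-activity weight. The sketch below is an illustrative variant of that idea, not the paper's exact filter; the blending rule is an assumption:

```python
import numpy as np

def weighted_arma(cep, speech_weight, order=2):
    """Smooth a cepstral sequence (frames x dims) with an ARMA filter,
    blending per frame by speech_weight in [0, 1]: high weight keeps
    the raw frame (speech), low weight applies more smoothing."""
    T = cep.shape[0]
    out = np.array(cep, dtype=float)
    for t in range(order, T - order):
        ar = out[t - order:t].sum(axis=0)        # past filtered outputs
        ma = cep[t:t + order + 1].sum(axis=0)    # current + future inputs
        smoothed = (ar + ma) / (2 * order + 1)
        w = speech_weight[t]
        out[t] = w * cep[t] + (1.0 - w) * smoothed
    return out
```

A constant sequence passes through unchanged, which is a quick sanity check that the filter has unit DC gain.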


Improved speech emotion recognition using histogram equalization and data augmentation techniques (히스토그램 등화와 데이터 증강 기법을 이용한 개선된 음성 감정 인식)

  • Heo, Woon-Haeng;Kwon, Oh-Wook
    • Phonetics and Speech Sciences
    • /
    • v.9 no.2
    • /
    • pp.77-83
    • /
    • 2017
  • We propose a new method to reduce emotion recognition errors caused by variation in speaker characteristics and speech rate. Firstly, to reduce variation in speaker characteristics, we adjust features from a test speaker to fit the distribution of all training data by using the histogram equalization (HE) algorithm. Secondly, to deal with variation in speech rate, we augment the training data with speech generated at various speech rates. In computer experiments using EMO-DB, KRN-DB and eNTERFACE-DB, the proposed method is shown to improve weighted accuracy relatively by 34.7%, 23.7% and 28.1%, respectively.
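The HE step above maps each test feature through its empirical CDF onto the training distribution's quantiles. A minimal one-dimensional sketch (in practice this would be applied per feature dimension; the implementation details are assumptions, not the paper's code):

```python
import numpy as np

def histogram_equalize(test_feat, train_feat):
    """Order-statistics histogram equalization for one feature dimension:
    each test value is replaced by the training-distribution quantile at
    the test value's empirical CDF position, so the equalized test
    features follow the training distribution."""
    ranks = np.argsort(np.argsort(test_feat))   # rank of each test value
    cdf = (ranks + 0.5) / len(test_feat)        # empirical CDF positions
    return np.quantile(train_feat, cdf)
```

After equalization, test features drawn from a shifted/scaled distribution take on the training distribution's statistics, which is what removes the speaker mismatch.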

Implementation of CNN in the view of mini-batch DNN training for efficient second order optimization (효과적인 2차 최적화 적용을 위한 Minibatch 단위 DNN 훈련 관점에서의 CNN 구현)

  • Song, Hwa Jeon;Jung, Ho Young;Park, Jeon Gue
    • Phonetics and Speech Sciences
    • /
    • v.8 no.2
    • /
    • pp.23-30
    • /
    • 2016
  • This paper describes implementation schemes for CNNs from the viewpoint of mini-batch DNN training, for efficient second-order optimization. By simply arranging an input image as a sequence of local patches, the same procedure used to update the parameters of a DNN can be used to train the parameters of a CNN, which is effectively equivalent to mini-batch DNN training. Through this conversion, second-order optimization, which provides higher performance, can be applied straightforwardly to train the CNN parameters. In both image recognition on the MNIST DB and syllable-based automatic speech recognition, the proposed CNN implementation scheme shows better performance than a DNN-based one.
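The patch-arrangement trick in this abstract is the familiar im2col construction: every local patch becomes one row of a matrix, so the patch set looks like a mini-batch of DNN inputs and convolution reduces to a single dense matrix multiply. A minimal sketch (single channel, stride 1, "valid" region; not the paper's implementation):

```python
import numpy as np

def im2col(img, k):
    """Flatten every k x k patch of a 2-D image into a row, producing a
    (num_patches, k*k) matrix that a DNN trainer can treat as one
    mini-batch of input vectors."""
    H, W = img.shape
    return np.stack([img[i:i + k, j:j + k].ravel()
                     for i in range(H - k + 1)
                     for j in range(W - k + 1)])

def conv_via_matmul(img, kernel):
    """2-D valid cross-correlation expressed as im2col @ weight vector,
    i.e. the convolution layer computed with DNN-style machinery."""
    k = kernel.shape[0]
    out = im2col(img, k) @ kernel.ravel()
    return out.reshape(img.shape[0] - k + 1, img.shape[1] - k + 1)
```

Because the whole layer is now one matrix product over a "mini-batch" of patches, curvature information for second-order updates can be accumulated exactly as it is for ordinary DNN layers.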