• Title/Summary/Keyword: Stage Speech


Multi-stage Speech Recognition Using Confidence Vector (신뢰도 벡터 기반의 다단계 음성인식)

  • Jeon, Hyung-Bae; Hwang, Kyu-Woong; Chung, Hoon; Kim, Seung-Hi; Park, Jun; Lee, Yun-Keun
    • MALSORI / no.63 / pp.113-124 / 2007
  • In this paper, we propose the use of a confidence vector as an intermediate input feature in a multi-stage speech recognition architecture to improve recognition accuracy. The multi-stage structure was introduced to reduce the computational complexity of the decoding procedure and thereby achieve faster speech recognition. A conventional multi-stage recognizer is usually composed of three stages: acoustic search, lexical search, and acoustic re-scoring. Here we focus on improving the accuracy of lexical decoding by feeding it a confidence vector instead of the phoneme sequence that is typically used. Experiments on a 220K-entry Korean Point-of-Interest (POI) domain show that the proposed method improves accuracy.

  • PDF
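The confidence-vector idea above can be sketched as follows: instead of passing only the 1-best phoneme per segment to the lexical search, each segment carries a posterior-like score for every candidate phoneme, so competing hypotheses survive into the next stage. This is a minimal illustration, not the authors' exact formulation; the softmax over per-segment phoneme log-likelihoods and the phoneme labels are assumptions.

```python
import math

def confidence_vector(phone_log_likelihoods):
    """Turn per-phoneme acoustic log-likelihoods for one segment into a
    posterior-like confidence vector via softmax (illustrative sketch)."""
    m = max(phone_log_likelihoods.values())          # subtract max for stability
    exps = {p: math.exp(ll - m) for p, ll in phone_log_likelihoods.items()}
    total = sum(exps.values())
    return {p: e / total for p, e in exps.items()}

# Hypothetical segment scores: passing only the 1-best phoneme ("s") to the
# lexical search would discard the still-plausible competitors ("z", "sh").
scores = {"s": -10.2, "z": -11.0, "sh": -12.5}
vec = confidence_vector(scores)
best = max(vec, key=vec.get)
```

Feeding the whole vector forward lets the lexical search recover from first-stage phoneme errors, at the cost of a larger intermediate representation.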

Continuous Speech Recognition using Syntactic Analysis and One-Stage DMS/DP (구문 분석과 One-Stage DMS/DP를 이용한 연속음 인식)

  • 안태옥
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.3 / pp.201-207 / 2004
  • This paper studies the recognition of continuous speech using syntactic analysis and one-stage DMS/DP. We first build DMS models with a section-division algorithm and then recognize continuous speech through the one-stage DMS/DP method combined with syntactic analysis. Besides experiments with the proposed method, we also evaluated the conventional one-stage DP method under the same data and conditions. The recognition experiments show that one-stage DMS/DP with syntactic analysis is superior to the conventional method.
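The frame-level dynamic-programming match that one-stage DMS/DP methods build on can be illustrated with classic dynamic time warping. The 1-D features and absolute-difference local cost below are simplifying assumptions; the actual method matches multi-section (DMS) templates against continuous speech in a single pass.

```python
def dtw_distance(a, b):
    """Classic DTW between two 1-D feature sequences: the cheapest
    monotonic alignment cost, computed by dynamic programming."""
    inf = float("inf")
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # insertion, deletion, or match step
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# A time-warped copy of a template still matches with zero cost.
d = dtw_distance([1, 2, 3], [1, 1, 2, 2, 3])
```

One-stage DP extends this idea by letting the alignment path jump between word templates, so word boundaries fall out of the single global search rather than a separate segmentation step.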

Multi-stage Recognition for POI (다단계 인식기반의 POI 인식기 개발)

  • Jeon, Hyung-Bae; Hwang, Kyu-Woong; Chung, Hoon; Kim, Seung-Hi; Park, Jun; Lee, Yun-Keun
    • Proceedings of the KSPS conference / 2007.05a / pp.131-134 / 2007
  • We propose a multi-stage recognizer architecture that reduces the computational load and enables fast recognition. To improve the performance of the baseline multi-stage recognizer, we introduced a new feature: a confidence vector for each phone segment instead of the best phoneme sequence. The multi-stage recognizer with the new feature achieves better n-best performance and greater robustness.

  • PDF

Automatic Speech Database Verification Method Based on Confidence Measure

  • Kang Jeomja; Jung Hoyoung; Kim Sanghun
    • MALSORI / no.51 / pp.71-84 / 2004
  • In this paper, we propose an automatic speech database verification method (automatic verification) based on a confidence measure for large speech databases. The method verifies the consistency between a given transcription and the speech using the confidence measure. The automatic verification process consists of two stages: word-level likelihood computation and multi-level likelihood ratio computation. In the first stage, we calculate the word-level likelihood using the Viterbi decoding algorithm and generate the segment information. In the second stage, we calculate word-level and phone-level likelihood ratios based on a confidence measure with an anti-phone model. Automatic verification achieved about 61% error reduction and cut the verification time from one month of manual work to one or two days.

  • PDF
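The core confidence test above is a log-likelihood ratio between the transcription's phone model and an anti-phone model. A minimal sketch, assuming precomputed log-likelihoods and a hypothetical decision threshold:

```python
def verify_segment(ll_model, ll_anti, threshold=0.0):
    """Confidence-measure verification of one segment: compare the
    log-likelihood under the claimed phone model against an anti-phone
    model; a low ratio flags a transcription/speech mismatch.
    (threshold is a hypothetical tuning parameter.)"""
    llr = ll_model - ll_anti
    return llr, llr >= threshold

# A segment whose claimed phone scores well against its anti-model passes;
# one that scores worse than the anti-model is flagged for inspection.
llr, ok = verify_segment(ll_model=-42.0, ll_anti=-47.5)
bad_llr, bad_ok = verify_segment(ll_model=-50.0, ll_anti=-44.0)
```

Applying this at both the word and phone levels, as the paper describes, lets a verifier localize errors instead of rejecting whole utterances.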

Robust Speech Detection Using the AURORA Front-End Noise Reduction Algorithm under Telephone Channel Environments (AURORA 잡음 처리 알고리즘을 이용한 전화망 환경에서의 강인한 음성 검출)

  • Suh Youngjoo; Ji Mikyong; Kim Hoi-Rin
    • MALSORI / no.48 / pp.155-173 / 2003
  • This paper proposes a noise reduction-based speech detection method for telephone channel environments. We adopt the AURORA front-end noise reduction algorithm, based on a two-stage mel-warped Wiener filter, as a preprocessor for a frequency-domain speech detector that uses mel filter-bank band energies as its feature parameters. The preprocessor first removes adverse noise components from the incoming noisy speech signals, and the detector at the next stage locates the speech regions in the noise-reduced signals. Experimental results show that the proposed method is very effective in improving not only the performance of the speech detector but also that of the subsequent speech recognizer.

  • PDF
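The Wiener filtering at the heart of the AURORA front end attenuates each frequency band in proportion to its estimated signal-to-noise ratio. A simplified per-band sketch (the power-subtraction SNR estimate and the gain floor are assumptions, not the standard's exact recursion):

```python
def wiener_gain(noisy_power, noise_power, floor=0.01):
    """Per-band Wiener filter gain G = SNR / (1 + SNR), with the SNR
    crudely estimated by power subtraction and the gain floored to
    avoid musical-noise artifacts (simplified sketch)."""
    snr = max(noisy_power - noise_power, 0.0) / noise_power
    return max(snr / (1.0 + snr), floor)

# A strongly speech-dominant band passes nearly unchanged; a noise-only
# band is attenuated down to the floor.
speech_band_gain = wiener_gain(100.0, 1.0)
noise_band_gain = wiener_gain(1.0, 1.0)
```

The real front end applies this twice (hence "two-stage") on mel-warped spectra, which smooths the noise estimate before the second, finer pass.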

Two-Microphone Generalized Sidelobe Canceller with Post-Filter Based Speech Enhancement in Composite Noise

  • Park, Jinsoo; Kim, Wooil; Han, David K.; Ko, Hanseok
    • ETRI Journal / v.38 no.2 / pp.366-375 / 2016
  • This paper describes an algorithm to suppress composite noise in a two-microphone speech enhancement system for robust hands-free speech communication. The proposed algorithm has four stages. The first stage estimates the power spectral density (PSD) of the residual stationary noise, based on the detection of nonstationary-signal-dominant time-frequency bins (TFBs) at the generalized sidelobe canceller (GSC) output. Second, speech-dominant TFBs are identified among the previously detected nonstationary-signal-dominant TFBs, and the PSDs of speech and residual nonstationary noise are estimated. In the final stage, the bin-wise output signal-to-noise ratio is obtained from these power estimates and a Wiener post-filter is constructed to attenuate the residual noise. Compared to conventional beamforming and post-filter algorithms, the proposed speech enhancement algorithm shows significant performance improvement in terms of perceptual evaluation of speech quality (PESQ).
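The first stage's TFB classification can be sketched as a simple power test: a bin of the beamformer output whose power exceeds the stationary-noise estimate by some margin is labeled nonstationary-signal dominant. The margin and the dB formulation below are hypothetical simplifications of the paper's detector.

```python
import math

def classify_tfb(out_power, stat_noise_power, margin_db=6.0):
    """Label a time-frequency bin of the GSC output as nonstationary-signal
    dominant when its power exceeds the stationary-noise estimate by a
    margin (margin_db is a hypothetical tuning value)."""
    ratio_db = 10.0 * math.log10(out_power / stat_noise_power)
    return ratio_db > margin_db

# A bin 10 dB above the stationary-noise floor is flagged; a bin barely
# above it is treated as stationary noise and feeds the noise PSD update.
flagged = classify_tfb(10.0, 1.0)
not_flagged = classify_tfb(1.5, 1.0)
```

The bins classified this way drive the speech and noise PSD estimates from which the final Wiener post-filter gain is built.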

A Noise Robust Speech Recognition Method Using Model Compensation Based on Speech Enhancement (음성 개선 기반의 모델 보상 기법을 이용한 강인한 잡음 음성 인식)

  • Shen, Guang-Hu; Jung, Ho-Youl; Chung, Hyun-Yeol
    • The Journal of the Acoustical Society of Korea / v.27 no.4 / pp.191-199 / 2008
  • In this paper, we propose an MWF-PMC noise processing method that enhances the input speech using Mel-warped Wiener Filtering (MWF) at the pre-processing stage and compensates the recognition model using Parallel Model Combination (PMC) at the post-processing stage, for speech recognition in noisy environments. The PMC uses the residual noise extracted from the silence region of the enhanced speech to compensate the clean-speech model, which is expected to improve recognition performance in noisy environments. For the recognition experiments, we down-sampled the KLE PBW (Phoneme Balanced Words) 452-word speech data to 8 kHz and created noisy speech at five SNR levels (0 dB, 5 dB, 10 dB, 15 dB, and 20 dB) by adding Subway, Car, and Exhibition noise to the clean speech. The recognition results confirm the effectiveness of the proposed MWF-PMC method, which outperformed the existing combined methods across all conditions.
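The model-compensation step can be illustrated with the standard PMC log-add approximation: clean-speech and noise model means are combined by adding in the linear spectral domain, since speech and additive noise add in power, not in the log domain. The sketch below omits the cepstral transforms and variance compensation of full PMC.

```python
import math

def pmc_log_add(mu_clean, mu_noise):
    """PMC log-add approximation: combine clean-speech and noise model
    means (log-spectral domain) by exponentiating, adding in the linear
    domain, and taking the log again (simplified sketch)."""
    return [math.log(math.exp(s) + math.exp(n))
            for s, n in zip(mu_clean, mu_noise)]

# Hypothetical 2-band means: where the noise is far below the speech the
# compensated mean barely moves; where they are comparable it rises.
compensated = pmc_log_add([2.0, 1.0], [-3.0, 0.9])
```

Estimating `mu_noise` from the silence regions of the *enhanced* speech, as the paper proposes, is what ties the MWF front end to the PMC back end.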

Design of a Korean Speech Recognition Platform (한국어 음성인식 플랫폼의 설계)

  • Kwon Oh-Wook; Kim Hoi-Rin; Yoo Changdong; Kim Bong-Wan; Lee Yong-Ju
    • MALSORI / no.51 / pp.151-165 / 2004
  • For educational and research purposes, a Korean speech recognition platform is designed. It is based on an object-oriented architecture and can be easily modified so that researchers can readily evaluate the performance of a recognition algorithm of interest. This platform will save development time for many who are interested in speech recognition. The platform includes the following modules: noise reduction, end-point detection, mel-frequency cepstral coefficient (MFCC) and perceptual linear prediction (PLP) feature extraction, hidden Markov model (HMM)-based acoustic modeling, n-gram language modeling, n-best search, and Korean language processing. The decoder can handle both lexical search trees for large-vocabulary speech recognition and finite-state networks for small-to-medium-vocabulary speech recognition. It performs a word-dependent n-best search with a bigram language model in the first, forward pass, then extracts a word lattice and rescores each lattice path with a trigram language model in the second pass.

  • PDF
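The two-pass decoding strategy described above (bigram first pass, trigram rescoring pass) can be sketched over an n-best list rather than a full lattice. The interface below is hypothetical: `trigram_logprob` stands in for any trigram LM lookup, and `lm_weight` is the usual LM scale factor.

```python
def rescore_nbest(nbest, trigram_logprob, lm_weight=1.0):
    """Second-pass rescoring: re-rank first-pass (bigram) n-best word
    sequences by combining their acoustic scores with a trigram LM.
    nbest is a list of (word_list, acoustic_logprob) pairs."""
    def score(entry):
        words, acoustic = entry
        padded = ["<s>", "<s>"] + words          # sentence-start padding
        lm = sum(trigram_logprob(padded[i - 2], padded[i - 1], padded[i])
                 for i in range(2, len(padded)))
        return acoustic + lm_weight * lm
    return sorted(nbest, key=score, reverse=True)

# Toy trigram LM (hypothetical): rewards a few known trigrams.
def toy_trigram(w1, w2, w3):
    good = {("<s>", "<s>", "i"), ("<s>", "i", "am"), ("i", "am", "fine")}
    return 0.0 if (w1, w2, w3) in good else -2.0

# The acoustically better hypothesis loses to the linguistically better one.
nbest = [(["i", "am", "vine"], -10.0), (["i", "am", "fine"], -10.5)]
reranked = rescore_nbest(nbest, toy_trigram)
```

Rescoring a lattice instead of a flat n-best list, as the platform does, exposes many more path combinations to the trigram at similar cost.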

Long-Term Follow-Up Study of Young Adults Treated for Unilateral Complete Cleft Lip, Alveolus, and Palate by a Treatment Protocol Including Two-Stage Palatoplasty: Speech Outcomes

  • Kappen, Isabelle Francisca Petronella Maria; Bittermann, Dirk; Janssen, Laura; Bittermann, Gerhard Koendert Pieter; Boonacker, Chantal; Haverkamp, Sarah; de Wilde, Hester; Van Der Heul, Marise; Specken, Tom FJMC; Koole, Ron; Kon, Moshe; Breugem, Corstiaan Cornelis; van der Molen, Aebele Barber Mink
    • Archives of Plastic Surgery / v.44 no.3 / pp.202-209 / 2017
  • Background: No consensus exists on the optimal treatment protocol for orofacial clefts or the optimal timing of cleft palate closure. This study investigated factors influencing speech outcomes after two-stage palate repair in adults with a non-syndromal complete unilateral cleft lip and palate (UCLP). Methods: This was a retrospective analysis of adult patients with a UCLP who underwent two-stage palate closure and were treated at our tertiary cleft centre. Patients aged 17 years or older were invited for a final speech assessment. Their medical history was obtained from their medical files, and speech outcomes were assessed by a speech pathologist during the follow-up consultation. Results: Forty-eight patients were included in the analysis, with a mean age of 21 years (standard deviation, 3.4 years). The mean ages at hard and soft palate closure were 3 years and 8.0 months, respectively. In 40% of the patients, a pharyngoplasty was performed. On a 5-point intelligibility scale, 84.4% received a score of 1 or 2, meaning that their speech was intelligible. We observed a significant correlation between intelligibility scores and the incidence of articulation errors (P<0.001). In total, 36% showed mild to moderate hypernasality during the speech assessment, and 11%-17% of the patients exhibited increased nasalance scores, assessed through nasometry. Conclusions: The present study describes long-term speech outcomes after two-stage palatoplasty with hard palate closure at a mean age of 3 years. We observed moderate long-term intelligibility scores, a relatively high incidence of persistent hypernasality, and a high pharyngoplasty incidence.

Speech Evaluation Variables Related to Speech Intelligibility in Children with Spastic Cerebral Palsy (경직형 뇌성마비아동의 말명료도 및 말명료도와 관련된 말 평가 변인)

  • Park, Ji-Eun; Kim, Hyang-Hee; Shin, Ji-Cheol; Choi, Hong-Shik; Sim, Hyun-Sub; Park, Eun-Sook
    • Phonetics and Speech Sciences / v.2 no.4 / pp.193-212 / 2010
  • The purpose of our study was to provide effective speech evaluation items covering the variables that successfully predict speech intelligibility in children with cerebral palsy (CP). The subjects were 55 children with spastic-type cerebral palsy. For the speech evaluation, we performed a speech-subsystem evaluation and a speech intelligibility test. The results are as follows. First, the speech-subsystem evaluation consisted of 48 task items in an observational evaluation stage and three severity levels; the levels correlated with gross motor function, fine motor function, and age. Second, the evaluation items were rearranged into seven factors. Third, 34 of the 48 task items positively correlated with the syllable intelligibility rating: four items in the observational evaluation stage and, among the nonverbal articulatory function items, 11 in level one, 12 in level two, and eight in level three. Fourth, 23 of the 48 items correlated with the sentence intelligibility rating: one item in the observational evaluation stage (in the articulatory structure evaluation task), six in level one, eight in level two, and eight in level three. Fifth, a total of 14 items influenced the syllable intelligibility rating. Sixth, a total of 13 items influenced the sentence intelligibility rating. According to these results, the variables influencing the speech intelligibility of children with CP among the articulatory function tasks lay in the respiratory function, phonatory function, and lip- and chin-related tasks; we found no correlation for tongue function.
The results of our study could be applied to speech evaluation, setting therapy goals, and evaluating the degree of progression in children with CP. We studied only children with the spastic type of cerebral palsy, and children with a severe degree of CP were underrepresented relative to those with a moderate degree, so their characteristics should be taken into account when evaluating children with other degrees of severity. Further study of speech evaluation variables in relation to the severity of the speech intelligibility impairment and different types of cerebral palsy may be necessary.

  • PDF