• Title/Summary/Keyword: Automatic Speech Recognition

Development of an Optimized Feature Extraction Algorithm for Throat Signal Analysis

  • Jung, Young-Giu;Han, Mun-Sung;Lee, Sang-Jo
    • ETRI Journal, v.29 no.3, pp.292-299, 2007
  • In this paper, we present a speech recognition system using a throat microphone. The use of this kind of microphone minimizes the impact of environmental noise. Due to the absence of high frequencies and the partial loss of formant frequencies, previous systems using throat microphones have shown a lower recognition rate than systems which use standard microphones. To develop a high performance automatic speech recognition (ASR) system using only a throat microphone, we propose two methods. First, based on Korean phonological feature theory and a detailed throat signal analysis, we show that it is possible to develop an ASR system using only a throat microphone, and propose conditions of the feature extraction algorithm. Second, we optimize the zero-crossing with peak amplitude (ZCPA) algorithm to guarantee the high performance of the ASR system using only a throat microphone. For ZCPA optimization, we propose an intensification of the formant frequencies and a selection of cochlear filters. Experimental results show that this system yields a performance improvement of about 4% and a reduction in time complexity of 25% when compared to the performance of a standard ZCPA algorithm on throat microphone signals.
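The per-band ZCPA operation described above can be sketched in a few lines. This is a minimal single-band illustration in Python, assuming the signal has already passed one cochlear band-pass filter; the full front end runs many such bands and sums their histograms, and the bin edges here are illustrative, not the paper's filter design.

```python
import math

def zcpa_histogram(band_signal, fs, bin_edges):
    """Single-band ZCPA sketch: each interval between successive upward
    zero crossings votes, weighted by log-compressed peak amplitude, into
    the frequency bin implied by the interval length."""
    # indices of upward zero crossings
    ups = [i for i in range(1, len(band_signal))
           if band_signal[i - 1] < 0 <= band_signal[i]]
    hist = [0.0] * (len(bin_edges) - 1)
    for a, b in zip(ups, ups[1:]):
        freq = fs / (b - a)                            # interval -> frequency
        peak = max(abs(x) for x in band_signal[a:b])   # peak in the interval
        weight = math.log(1.0 + peak)                  # log compression
        for k in range(len(hist)):
            if bin_edges[k] <= freq < bin_edges[k + 1]:
                hist[k] += weight
                break
    return hist
```

A 200 Hz tone sampled at 8 kHz, for example, produces upward zero crossings every 40 samples, so all of its weight lands in the bin containing 200 Hz.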

Convolutional Neural Networks for Character-level Classification

  • Ko, Dae-Gun;Song, Su-Han;Kang, Ki-Min;Han, Seong-Wook
    • IEIE Transactions on Smart Processing and Computing, v.6 no.1, pp.53-59, 2017
  • Optical character recognition (OCR) automatically recognizes text in an image. OCR is still a challenging problem in computer vision. A successful solution to OCR has important device applications, such as text-to-speech conversion and automatic document classification. In this work, we analyze character recognition performance using three current state-of-the-art deep-learning structures: AlexNet, LeNet, and SPNet. For this, we built our own dataset containing digits and upper- and lower-case characters. We experiment in the presence of salt-and-pepper or Gaussian noise, and report a performance comparison in terms of recognition error. Five-fold cross-validation indicates that the SPNet structure (our approach) outperforms AlexNet and LeNet in recognition error.
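The salt-and-pepper noise condition used in the experiments is easy to reproduce; a minimal sketch, assuming 8-bit grayscale pixels stored as a flat list (the noise amount and pixel values are illustrative):

```python
import random

def add_salt_and_pepper(pixels, amount, seed=0):
    """Set a fraction `amount` of the pixels to pure black (0) or
    pure white (255), leaving the rest untouched."""
    rng = random.Random(seed)          # fixed seed for reproducibility
    noisy = list(pixels)
    n_corrupt = int(len(pixels) * amount)
    for idx in rng.sample(range(len(pixels)), n_corrupt):
        noisy[idx] = rng.choice((0, 255))
    return noisy
```

Gaussian noise would instead add a zero-mean normal perturbation to every pixel and clip to [0, 255].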

New Postprocessing Methods for Rejecting Out-of-Vocabulary Words

  • Song, Myung-Gyu
    • The Journal of the Acoustical Society of Korea, v.16 no.3E, pp.19-23, 1997
  • The goal of postprocessing in automatic speech recognition is to improve recognition performance by utterance verification at the output of the recognition stage. It is focused on the effective rejection of out-of-vocabulary words based on the confidence score of the hypothesized candidate word. We present two methods for computing confidence scores. Both methods are based on the distance between each observation vector and the representative code vector, which is defined as the most likely code vector at each state. While the first method employs simple time normalization, the second one uses a normalization technique based on the concept of an on-line garbage model [1]. In a speaker-independent isolated word recognition experiment with discrete-density HMMs, the second method outperforms both the first one and the conventional likelihood ratio scoring method [2].
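The first (time-normalized) score follows directly from the description; a sketch in Python, assuming the state alignment and the per-state representative code vectors are already available from the recognizer:

```python
import math

def confidence_score(observations, state_path, rep_vectors):
    """Average Euclidean distance between each observation vector and the
    representative code vector of its aligned state (simple time
    normalization, i.e. the first method described above)."""
    total = sum(math.dist(obs, rep_vectors[state])
                for obs, state in zip(observations, state_path))
    return total / len(observations)        # normalize by utterance length

def reject_as_oov(observations, state_path, rep_vectors, threshold):
    """Reject the hypothesis as out-of-vocabulary when the distance-based
    score exceeds a threshold (threshold value is application-tuned)."""
    return confidence_score(observations, state_path, rep_vectors) > threshold
```

The second method would replace the plain length normalization with an on-line garbage score computed from the best competing code vectors at each frame.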

Automatic Generation of Concatenate Morphemes for Korean LVCSR (대어휘 연속음성 인식을 위한 결합형태소 자동생성)

  • 박영희;정민화
    • The Journal of the Acoustical Society of Korea, v.21 no.4, pp.407-414, 2002
  • In this paper, we present a method that automatically generates concatenate-morpheme-based language models to improve the performance of Korean large vocabulary continuous speech recognition. The focus is on reducing recognition errors for monosyllabic morphemes, which occupy 54% of the training text corpus and are more frequently misrecognized. The knowledge-based method using POS patterns has disadvantages, such as the difficulty of making rules and the production of many low-frequency concatenate morphemes. The proposed method automatically selects morpheme pairs from the training text data based on measures such as frequency, mutual information, and unigram log-likelihood. Experiments were performed using a 7M-morpheme text corpus and a 20K-morpheme lexicon. The frequency measure with a constraint on the number of morphemes used for concatenation produces the best result, reducing monosyllables from 54% to 30%, bigram perplexity from 117.9 to 97.3, and the morpheme error rate (MER) from 21.3% to 17.6%.
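One of the selection measures, pointwise mutual information over adjacent morpheme pairs, can be sketched as follows (toy corpus; the paper also uses frequency and unigram log-likelihood measures and constrains the number of concatenations):

```python
import math
from collections import Counter

def select_morpheme_pairs(sentences, top_k):
    """Rank adjacent morpheme pairs by pointwise mutual information
    and return the top_k concatenation candidates."""
    uni, bi = Counter(), Counter()
    for morphemes in sentences:
        uni.update(morphemes)
        bi.update(zip(morphemes, morphemes[1:]))   # adjacent pairs
    n_uni, n_bi = sum(uni.values()), sum(bi.values())

    def pmi(pair):
        a, b = pair
        p_ab = bi[pair] / n_bi
        return math.log2(p_ab / ((uni[a] / n_uni) * (uni[b] / n_uni)))

    return sorted(bi, key=pmi, reverse=True)[:top_k]
```

Note that raw PMI tends to promote rare pairs, which is one reason a frequency measure with constraints performed best in the experiments.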

An Implementation of the Automatic Switching System using Speech Recognition (음성 인식을 이용한 자동 교환 시스템 구현)

  • 함정표;김현아;박익현
    • Proceedings of the IEEK Conference, 2000.09a, pp.935-938, 2000
  • In this paper, we implement an automatic switching system that connects telephone calls using speech recognition and evaluate its performance. In addition to the essential speech recognition capability, the implemented system provides a DSP diagnostic function, addition and modification of the recognition vocabulary, and speech data collection. The target task is speaker-independent, flexible-vocabulary, isolated-word recognition over the telephone network using a semi-continuous hidden Markov model (SCHMM), and a Texas Instruments TMS320C32 was used for the real-time implementation [6]. With a vocabulary of about 1,300 department and person names, the recognition accuracy is 91.5%.

Study on Automatic Speech Recognition In Fighter Avionics (전투기 음성인식제어 기술에 관한 연구)

  • Kim, Seong-Woo;Jang, Han-Jin;Park, Jae-Seong
    • Proceedings of the KIEE Conference, 2007.07a, pp.1866-1867, 2007
  • Regarding the application of speech recognition technology in the fighter cockpit, this paper reviews the overview, history, and configuration of fighter speech recognition systems as well as the voice commands (command syntax) in actual use, and analyzes the development trends of speech recognition systems applied to fighters.

A Study on Discrete Hidden Markov Model for Vibration Monitoring and Diagnosis of Turbo Machinery (터보회전기기의 진동모니터링 및 진단을 위한 이산 은닉 마르코프 모델에 관한 연구)

  • Lee, Jong-Min;Hwang, Yo-ha;Song, Chang-Seop
    • The KSFM Journal of Fluid Machinery, v.7 no.2 s.23, pp.41-49, 2004
  • Condition monitoring is very important in turbo machinery because a single failure could cause critical damage to its plant. Automatic fault recognition has therefore been one of the main research topics in the condition monitoring area. We have used a relatively new fault recognition method for mechanical systems, the hidden Markov model (HMM). It has been widely used in speech recognition; however, its application to fault recognition of mechanical signals has been very limited despite its good potential. In this paper, a discrete HMM (DHMM) was used to recognize faults of a rotor system in order to study its fault recognition ability. We set up a rotor kit under unbalance and oil whirl conditions and sampled vibration signals for the two failure conditions. A DHMM for each failure condition was trained on the sampled signals. Next, we changed the setup and the rotating speed of the rotor kit, sampled vibration signals, and applied each DHMM to the sampled data. DHMMs trained on data from one rotating speed showed good fault recognition ability despite the lack of training data, but DHMMs trained on data from four different rotating speeds showed better robustness.
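The recognition step in such a setup reduces to scoring the observed (vector-quantized) vibration sequence under each fault's DHMM and picking the maximum. A minimal scaled forward algorithm in Python; the model parameters below are illustrative, not from the paper:

```python
import math

def hmm_log_likelihood(obs, pi, A, B):
    """Log P(obs | model) for a discrete HMM via the scaled forward
    algorithm. pi: initial probabilities, A: state transition matrix,
    B: emission probabilities over discrete codewords."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    loglik = 0.0
    for t in range(len(obs)):
        if t > 0:
            alpha = [sum(alpha[j] * A[j][i] for j in range(n)) * B[i][obs[t]]
                     for i in range(n)]
        scale = sum(alpha)                 # rescale to avoid underflow
        loglik += math.log(scale)
        alpha = [a / scale for a in alpha]
    return loglik

def classify_fault(obs, models):
    """Pick the fault label whose DHMM assigns the highest likelihood."""
    return max(models, key=lambda name: hmm_log_likelihood(obs, *models[name]))
```

In practice the codeword sequence comes from vector-quantizing spectral features of the vibration signal, exactly as in speech recognition front ends.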

Dialog System based on Speech Recognition for the Elderly with Dementia (음성인식에 기초한 치매환자 노인을 위한 대화시스템)

  • Kim, Sung-Il;Kim, Byoung-Chul
    • Journal of the Korea Institute of Information and Communication Engineering, v.6 no.6, pp.923-930, 2002
  • This study aims at developing a dialog system to improve the quality of life of elderly people with dementia. The proposed system mainly consists of three modules: speech recognition, automatic search of a time-sorted dialog database, and agreeable responses using the recorded voices of caregivers. As a first step, the dialog that dementia patients often utter at a nursing home was investigated. Next, the system was organized to recognize these utterances in order to meet the patients' requests or demands. The system then responds with the recorded voices of professional caregivers. For evaluation, a comparison study was carried out with and without the system. Occupational therapists evaluated a male subject's reactions to the system by photographing his behavior. The evaluation results showed that the dialog system was more responsive in catering to the needs of the dementia patient than professional caregivers were. Moreover, the proposed system led the patient to talk more in mutual communication than caregivers did.
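The database-search module amounts to matching the recognized utterance against keyword-indexed entries and returning the pre-recorded caregiver response. A toy sketch, with entries and file names that are hypothetical, not from the paper:

```python
def search_dialog_db(recognized, dialog_db, default="default_reply.wav"):
    """Return the recorded caregiver response for the first database entry
    whose keywords appear in the recognized utterance; fall back to a
    neutral default response otherwise."""
    for keywords, response_wav in dialog_db:
        if any(k in recognized for k in keywords):
            return response_wav
    return default
```

A time-sorted database would additionally restrict the search to entries relevant to the current time of day (meals, bedtime, and so on).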

Automatic Generation of Pronunciation Variants for Korean Continuous Speech Recognition (한국어 연속음성 인식을 위한 발음열 자동 생성)

  • 이경님;전재훈;정민화
    • The Journal of the Acoustical Society of Korea, v.20 no.2, pp.35-43, 2001
  • Many speech recognition systems use a pronunciation lexicon with possible multiple phonetic transcriptions for each word. The pronunciation lexicon is often created manually. This process requires a lot of time and effort, and furthermore, it is very difficult to maintain the consistency of the lexicon. To handle these problems, we present a model based on morphophonological analysis for automatically generating Korean pronunciation variants. By analyzing phonological variations frequently found in spoken Korean, we have derived about 700 phonemic contexts that trigger the multilevel application of the corresponding phonological processes, which consist of phonemic and allophonic rules. In generating pronunciation variants, morphological analysis is performed first to handle variations of phonological words. According to the morphological category, a set of tables reflecting phonemic context is looked up to generate pronunciation variants. Our experiments show that the proposed model produces mostly correct pronunciation variants of phonological words. We also estimated the usefulness of the pronunciation lexicon and the training phonetic transcriptions generated by the proposed system.
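The rule-application step can be sketched as a closure over optional rewrites on a phoneme sequence. The nasalization rule below (/k/ before /n/ becomes /ng/, as in 학년) is one real Korean phonological process; the rule-table format and romanization are illustrative, far simpler than the paper's ~700 contexts:

```python
def generate_variants(phonemes, rules):
    """Apply optional phonological rewrite rules (lhs -> rhs over phoneme
    subsequences) until closure, returning all pronunciation variants."""
    start = tuple(phonemes)
    variants = {start}
    frontier = [start]
    while frontier:
        cur = frontier.pop()
        for lhs, rhs in rules:
            for i in range(len(cur) - len(lhs) + 1):
                if cur[i:i + len(lhs)] == lhs:
                    new = cur[:i] + rhs + cur[i + len(lhs):]
                    if new not in variants:   # keep base and derived forms
                        variants.add(new)
                        frontier.append(new)
    return variants
```

Because each rule is optional, both the canonical and the assimilated forms survive, which is exactly what a multi-pronunciation lexicon needs.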

A Study on Korean Speech Animation Generation Employing Deep Learning (딥러닝을 활용한 한국어 스피치 애니메이션 생성에 관한 고찰)

  • Suk Chan Kang;Dong Ju Kim
    • KIPS Transactions on Software and Data Engineering, v.12 no.10, pp.461-470, 2023
  • While speech animation generation employing deep learning has been actively researched for English, there has been no prior work for Korean. This paper therefore employs supervised deep learning to generate Korean speech animation for the first time. In doing so, we find a significant effect: deep learning reduces speech animation research to speech recognition research, the predominant technique, and we study how to make the best use of this effect for Korean speech animation generation. The effect can help revitalize the recently inactive Korean speech animation research efficiently and effectively by clarifying the top-priority research target. The paper proceeds as follows: (i) choose the blendshape animation technique; (ii) implement the deep-learning model as a master-servant pipeline of an automatic speech recognition (ASR) module and a facial action coding (FAC) module; (iii) build a Korean speech facial motion capture dataset; (iv) prepare two comparison deep-learning models, one adopting an English ASR module and the other a Korean ASR module, with both using the same basic structure for their FAC modules; and (v) train the FAC module of each model dependently on its ASR module. A user study demonstrates that the model adopting the Korean ASR module and dependently training its FAC module (4.2/5.0 points) generates decisively more natural Korean speech animations than the model adopting the English ASR module (2.7/5.0 points). This result confirms the aforementioned effect, showing that the quality of Korean speech animation comes down to the accuracy of Korean ASR.
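The final stage of such a master-servant pipeline maps ASR output to per-frame blendshape weights. A toy illustration in Python; the blendshape names, the viseme table, and the phoneme-level interface are assumptions for the sketch, not details from the paper:

```python
def phonemes_to_blendshapes(phonemes, viseme_map):
    """Convert an ASR phoneme sequence into per-frame blendshape weight
    vectors; phonemes missing from the viseme table fall back to a
    neutral face (all weights zero)."""
    blendshape_names = ("jaw_open", "lip_round", "lip_close")
    frames = []
    for p in phonemes:
        weights = {name: 0.0 for name in blendshape_names}  # neutral face
        weights.update(viseme_map.get(p, {}))
        frames.append(weights)
    return frames
```

A learned FAC module replaces this fixed table with a network regressing the weights from ASR features, which is why ASR accuracy bounds animation quality.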