• Title/Summary/Keyword: Automatic Speech Recognition


Grammatical Quality Estimation for Error Correction in Automatic Speech Recognition (문법성 품질 예측에 기반한 음성 인식 오류 교정)

  • Mintaek Seo;Seung-Hoon Na;Minsoo Na;Maengsik Choi;Chunghee Lee
    • Annual Conference on Human and Language Technology
    • /
    • 2022.10a
    • /
    • pp.608-612
    • /
    • 2022
  • Since the advance of deep learning, many fields have used it to solve previously difficult tasks and provide greater convenience to users. However, it is still difficult to provide ideal services through deep learning. In particular, in speech recognition, Speech-To-Text (STT), which converts speech into text and so broadens the ways spoken input can be used, produces errors because its sentence-level output falls short of the ideal. In this paper, we recast STT output correction as grammatical error correction: to improve performance by assembling the correct tokens at the final stage, we apply a model that estimates the quality of each token to Korean and confirm the resulting performance improvement.

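The entry above treats STT correction as token-level quality estimation. Below is a minimal, hypothetical sketch of that general idea (an OK/BAD tagger over hypothesis tokens) in PyTorch; the model architecture, vocabulary, and example tokens are placeholders, not the authors' system.

```python
# A minimal sketch of token-level quality estimation for STT hypotheses.
# The BiLSTM tagger, vocabulary, and example sentence are hypothetical.
import torch
import torch.nn as nn

class TokenQualityEstimator(nn.Module):
    """BiLSTM tagger that scores each hypothesis token as correct (OK) or erroneous (BAD)."""
    def __init__(self, vocab_size, emb_dim=64, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.rnn = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 2)   # logits for {OK, BAD}

    def forward(self, token_ids):
        h, _ = self.rnn(self.emb(token_ids))
        return self.head(h)                    # (batch, seq_len, 2)

# Toy usage: score a fake tokenized STT hypothesis and flag low-quality tokens.
vocab = {"<pad>": 0, "나는": 1, "학교에": 2, "간다": 3, "갈다": 4}
hyp = torch.tensor([[1, 2, 4]])               # "나는 학교에 갈다" (last token is an ASR error)
model = TokenQualityEstimator(vocab_size=len(vocab))
with torch.no_grad():
    probs = model(hyp).softmax(dim=-1)
bad_prob = probs[0, :, 1]                      # probability each token is erroneous
print([(tok.item(), round(p.item(), 2)) for tok, p in zip(hyp[0], bad_prob)])
```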

Automatic Pronunciation Generator Using Selection Procedure for Exceptional Pronunciation Words (예외 단어 선별 작업을 이용한 자동 발음열 생성 시스템)

  • 안주은;김순협;김선희
    • The Journal of the Acoustical Society of Korea
    • /
    • v.23 no.3
    • /
    • pp.248-252
    • /
    • 2004
  • Cultural, social, economic, and other environmental factors affect our language: different words and terms are used and coined in different contexts, resulting in quantitative changes in the vocabulary. This paper presents an automatic pronunciation generator that uses a selection procedure for exceptional pronunciation words from an added text corpus, reflecting this dynamic nature of language. For our experiment, we used the text corpus released by ETRI for speech recognition, consisting of 53,750 sentences (740,497 Eojeols), and obtained a 100% performance level from the proposed automatic pronunciation generator.
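
A minimal sketch of the exception-lexicon idea described above: pronunciations are looked up in an exception list first and otherwise generated by regular rules, while a selection procedure harvests words whose reference pronunciation disagrees with the rules. The rule function and entries are simplified placeholders, not the paper's actual system.

```python
# A minimal sketch of a pronunciation generator with an exception-word list.
# The rules and entries below are simplified illustrations, not the paper's.

# Words whose surface pronunciation is not covered by the regular rules.
EXCEPTION_LEXICON = {
    "같이": "가치",    # palatalization
    "학교": "학꾜",    # tensification
}

def regular_rules(word: str) -> str:
    """Stand-in for a rule-based grapheme-to-phoneme converter."""
    return word  # identity here; a real system applies Korean phonological rules

def generate_pronunciation(word: str) -> str:
    """Look up exceptional words first, then fall back to the regular rules."""
    return EXCEPTION_LEXICON.get(word, regular_rules(word))

def harvest_exceptions(corpus_words, reference_g2p):
    """Select words whose reference pronunciation disagrees with the rules,
    so they can be added to the exception lexicon (the 'selection procedure')."""
    return {w: reference_g2p[w] for w in corpus_words
            if w in reference_g2p and reference_g2p[w] != regular_rules(w)}

print(generate_pronunciation("같이"))   # 가치 (from the exception lexicon)
print(generate_pronunciation("나무"))   # 나무 (regular rules apply)
```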

Maximum Likelihood-based Automatic Lexicon Generation for AI Assistant-based Interaction with Mobile Devices

  • Lee, Donghyun;Park, Jae-Hyun;Kim, Kwang-Ho;Park, Jeong-Sik;Kim, Ji-Hwan;Jang, Gil-Jin;Park, Unsang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.9
    • /
    • pp.4264-4279
    • /
    • 2017
  • In this paper, maximum likelihood-based automatic lexicon generation using mixed syllables is proposed for an unlimited-vocabulary voice interface for East Asian languages (e.g., Korean, Chinese, and Japanese) in AI-assistant based interaction with mobile devices. The conventional lexicon has two inevitable problems: 1) the tedious, repeated addition of out-of-lexicon units to the lexicon, and 2) the propagation of errors from morpheme analysis and space segmentation. The proposed method provides an automatic framework to solve both problems. It produces a level of overall accuracy similar to that of previous methods when a sentence contains one out-of-lexicon word, but it gives superior results, with absolute improvements of 1.62%, 5.58%, and 10.09% in word accuracy, when the number of out-of-lexicon words in a sentence is two, three, and four, respectively.
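
One way to picture likelihood-driven selection of mixed lexicon units is sketched below: for each word, keep the whole-word unit only if it is at least as likely as its syllable decomposition under a simple unigram model. This is an illustrative toy with made-up counts, not the paper's maximum-likelihood formulation.

```python
# A toy sketch of likelihood-based selection between whole-word and syllable units.
import math
from collections import Counter

def unigram_logprob(units, counts, total):
    """Log-likelihood of a unit sequence under a unigram model with add-one smoothing."""
    vocab = len(counts) + 1
    return sum(math.log((counts[u] + 1) / (total + vocab)) for u in units)

def choose_units(word, syllables, counts, total):
    """Keep the whole word as a lexicon unit only if it is at least as likely
    as spelling it out syllable by syllable."""
    whole = unigram_logprob([word], counts, total)
    split = unigram_logprob(syllables, counts, total)
    return [word] if whole >= split else syllables

counts = Counter({"날씨": 40, "알려줘": 35, "알": 5, "려": 5, "줘": 20})
total = sum(counts.values())
print(choose_units("알려줘", ["알", "려", "줘"], counts, total))  # the whole word wins here
```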

Subtitle Automatic Generation System using Speech to Text (음성인식을 이용한 자막 자동생성 시스템)

  • Son, Won-Seob;Kim, Eung-Kon
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.16 no.1
    • /
    • pp.81-88
    • /
    • 2021
  • Recently, many videos, such as online lecture videos prompted by COVID-19, have been produced. However, because of limited working hours and budgets, only a portion of these videos carry subtitles, which is emerging as an obstacle to information access for deaf viewers. In this paper, we develop a system that automatically generates subtitles using speech recognition and separates sentences using sentence endings and timing, in order to reduce the time and labor required for subtitle generation.
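
A minimal sketch of the subtitle pipeline described above: word-level STT output (word, start time, end time) is grouped into cues at sentence endings or long pauses and written out in SRT format. The ending heuristics and the example timestamps are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of turning word-level STT output into SRT subtitle cues,
# splitting at sentence endings or long pauses. Inputs are placeholders.
def fmt(t):
    """Seconds -> SRT timestamp (HH:MM:SS,mmm)."""
    ms = int(round(t * 1000))
    h, ms = divmod(ms, 3600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def to_srt(words, endings=("다", "요", ".", "?"), max_pause=1.0):
    """words: list of (text, start_sec, end_sec). Returns SRT-formatted text."""
    cues, cue = [], []
    for i, (text, start, end) in enumerate(words):
        cue.append((text, start, end))
        nxt = words[i + 1][1] if i + 1 < len(words) else None
        pause = (nxt - end) if nxt is not None else float("inf")
        if text.endswith(endings) or pause > max_pause:   # close cue at an ending or a long pause
            cues.append(cue)
            cue = []
    lines = []
    for n, c in enumerate(cues, 1):
        lines += [str(n), f"{fmt(c[0][1])} --> {fmt(c[-1][2])}",
                  " ".join(w for w, _, _ in c), ""]
    return "\n".join(lines)

words = [("안녕하세요", 0.0, 0.8), ("오늘", 1.1, 1.4), ("강의를", 1.4, 1.9), ("시작합니다", 1.9, 2.7)]
print(to_srt(words))
```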

A study on user defined spoken wake-up word recognition system using deep neural network-hidden Markov model hybrid model (Deep neural network-hidden Markov model 하이브리드 구조의 모델을 사용한 사용자 정의 기동어 인식 시스템에 관한 연구)

  • Yoon, Ki-mu;Kim, Wooil
    • The Journal of the Acoustical Society of Korea
    • /
    • v.39 no.2
    • /
    • pp.131-136
    • /
    • 2020
  • A Wake Up Word (WUW) is a short utterance used to switch a speech recognizer into recognition mode. A WUW defined by the user who actually uses the recognizer is called a user-defined WUW. In this paper, to recognize user-defined WUWs, we construct conventional Gaussian Mixture Model-Hidden Markov Model (GMM-HMM), Linear Discriminant Analysis (LDA)-GMM-HMM, and LDA-Deep Neural Network (DNN)-HMM based systems and compare their performance. To improve the recognition accuracy of the WUW system, a threshold method is also applied to each model, which significantly reduces the WUW recognition error rate and the rejection failure rate for non-WUW speech at the same time. For the LDA-DNN-HMM system, at a WUW error rate of 9.84 %, the rejection failure rate for non-WUW speech is 0.0058 %, about 4.82 times lower than that of the LDA-GMM-HMM system. These results demonstrate that the LDA-DNN-HMM model developed in this paper is highly effective for constructing a user-defined WUW recognition system.
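
The threshold method mentioned above can be pictured as a simple log-likelihood-ratio decision: accept the wake-up word only when the WUW model outscores a filler/background model by a margin. The sketch below uses placeholder scores; the paper's DNN-HMM scoring and threshold values are not reproduced.

```python
# A minimal sketch of score-threshold acceptance for wake-up word detection.
def accept_wuw(wuw_loglik: float, filler_loglik: float, threshold: float) -> bool:
    """Return True if the log-likelihood ratio clears the decision threshold."""
    return (wuw_loglik - filler_loglik) >= threshold

# Sweeping the threshold trades WUW misses against false alarms on non-WUW speech.
print(accept_wuw(-120.0, -150.0, threshold=20.0))   # True: strong WUW evidence
print(accept_wuw(-140.0, -138.0, threshold=20.0))   # False: rejected as non-WUW
```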

L1-norm Regularization for State Vector Adaptation of Subspace Gaussian Mixture Model (L1-norm regularization을 통한 SGMM의 state vector 적응)

  • Goo, Jahyun;Kim, Younggwan;Kim, Hoirin
    • Phonetics and Speech Sciences
    • /
    • v.7 no.3
    • /
    • pp.131-138
    • /
    • 2015
  • In this paper, we propose L1-norm regularization for state vector adaptation of the subspace Gaussian mixture model (SGMM). When designing a speaker adaptation system with a GMM-HMM acoustic model, MAP is the most typical technique to consider. In the MAP adaptation procedure, however, a large number of parameters must be updated simultaneously. Sparse adaptation, such as L1-norm regularization or sparse MAP, can be adopted to cope with this, but its performance is not as good as that of MAP adaptation. SGMM, on the other hand, does not suffer from sparse adaptation as much as GMM-HMM does, because each Gaussian mean vector in SGMM is defined as a weighted sum of basis vectors, which is much more robust to parameter fluctuation. Since only a few adaptation techniques are appropriate for SGMM, the proposed method can be powerful, especially when the amount of adaptation data is limited. Experimental results show that the error reduction rate of the proposed method is better than that of MAP adaptation of SGMM, even with little adaptation data.
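
A minimal numpy sketch of the L1-regularization idea for state-vector adaptation: a proximal-gradient (soft-thresholding) update keeps the adapted vector sparse around its speaker-independent value. The quadratic data term, dimensions, and regularization weight below are illustrative stand-ins for the SGMM auxiliary function, not the paper's derivation.

```python
# A minimal sketch of L1-regularized adaptation via proximal gradient descent.
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def adapt_state_vector(v0, H, g, lam, lr=0.1, iters=200):
    """Minimize f(v) = 0.5 v'Hv - g'v (a stand-in data term) + lam*||v - v0||_1,
    which keeps the adapted vector sparse around its initial value v0."""
    v = v0.copy()
    for _ in range(iters):
        grad = H @ v - g                                         # gradient of the data term
        v = v0 + soft_threshold(v - lr * grad - v0, lr * lam)    # proximal step around v0
    return v

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 8))
H = A.T @ A / 20                               # positive semi-definite "Hessian"
g = rng.standard_normal(8)
v0 = np.zeros(8)                               # speaker-independent state vector (toy)
v = adapt_state_vector(v0, H, g, lam=0.5)
print("nonzero adapted dimensions:", np.count_nonzero(np.abs(v - v0) > 1e-6))
```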

Study on the Situational satisfaction survey of Smart Phone based on voice recognition technology (스마트 폰 음성 인식 서비스의 상황별 만족도 조사)

  • Lee, Yoon-jeong;Kim, Seung-In
    • Journal of Digital Convergence
    • /
    • v.15 no.8
    • /
    • pp.351-357
    • /
    • 2017
  • The purpose of this study is to analyze the relationship between users' expectations and satisfaction by examining smartphone voice recognition services and surveying situational satisfaction with them. The concept and current status of speech recognition services were first investigated through a literature review, and a questionnaire survey was then designed and carried out on that basis. The results show that users are most likely to use the smartphone voice recognition service when making a telephone call; the service is used most often at home and when the hands are not free. From these results, various improvements could be derived, such as personalization, context-aware functions, automatic emergency recognition, and unlocking by voice. This research is expected to be useful for improving smartphone voice recognition services and developing wearable devices in the future.

Speech Recognition Using Linear Discriminant Analysis and Common Vector Extraction (선형 판별분석과 공통벡터 추출방법을 이용한 음성인식)

  • 남명우;노승용
    • The Journal of the Acoustical Society of Korea
    • /
    • v.20 no.4
    • /
    • pp.35-41
    • /
    • 2001
  • This paper describes Linear Discriminant Analysis and common vector extraction for speech recognition. A voice signal contains psychological and physiological properties of the speaker as well as dialect differences, acoustical environment effects, and phase differences. For these reasons, the same word uttered by different speakers can sound very different, which makes it very difficult to extract the properties common to a speech class (word or phoneme). Linear-algebra methods such as the KLT (Karhunen-Loève Transformation) are generally used to extract common properties from speech signals, but this paper uses the common vector extraction method suggested by M. Bilginer et al. Their method extracts an optimized common vector from the speech signals used for training and achieves 100% recognition accuracy on the training data used for common vector extraction. Despite these characteristics, the method has some drawbacks: the number of speech signals that can be used for training is limited, and the discriminant information among common vectors is not defined. This paper proposes an improved method that reduces the error rate by maximizing the discriminant information among common vectors, and also adds a novel method for normalizing the size of the common vectors. The results show improved performance of the algorithm, with recognition accuracy 2% higher than the conventional method.

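A minimal numpy sketch of common vector extraction for one speech class, as referenced above: the class's common vector is the component of a training vector orthogonal to the within-class difference subspace. The random features are placeholders, and the paper's LDA-based discriminant refinement is not shown.

```python
# A minimal sketch of common vector extraction for a single speech class.
import numpy as np

def common_vector(X):
    """X: (n_samples, dim) feature vectors of one class, with dim >= n_samples.
    Returns the class common vector, the part shared by all training samples."""
    diffs = X[1:] - X[0]                      # spans the "difference subspace"
    Q, _ = np.linalg.qr(diffs.T)              # orthonormal basis of that subspace
    proj = Q @ (Q.T @ X[0])                   # component inside the difference subspace
    return X[0] - proj                        # component orthogonal to all within-class differences

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 64))              # 5 utterances, 64-dim features for one word (toy data)
c = common_vector(X)
# Defining property: projecting any other training sample gives the same common vector.
Q, _ = np.linalg.qr((X[1:] - X[0]).T)
print(np.allclose(c, X[3] - Q @ (Q.T @ X[3])))   # True
```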

An Adaptive Utterance Verification Framework Using Minimum Verification Error Training

  • Shin, Sung-Hwan;Jung, Ho-Young;Juang, Biing-Hwang
    • ETRI Journal
    • /
    • v.33 no.3
    • /
    • pp.423-433
    • /
    • 2011
  • This paper introduces an adaptive and integrated utterance verification (UV) framework using minimum verification error (MVE) training as a new set of solutions suitable for real applications. UV is traditionally considered an add-on procedure to automatic speech recognition (ASR) and thus treated separately from the ASR system model design. This traditional two-stage approach often fails to cope with a wide range of variations, such as a new speaker or a new environment which is not matched with the original speaker population or the original acoustic environment that the ASR system is trained on. In this paper, we propose an integrated solution to enhance the overall UV system performance in such real applications. The integration is accomplished by adapting and merging the target model for UV with the acoustic model for ASR based on the common MVE principle at each iteration in the recognition stage. The proposed iterative procedure for UV model adaptation also involves revision of the data segmentation and the decoded hypotheses. Under this new framework, remarkable enhancement in not only recognition performance, but also verification performance has been obtained.
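
The MVE principle used above can be illustrated with a sigmoid-smoothed verification error, analogous to minimum classification error training: the hard accept/reject error is replaced by a differentiable surrogate that can drive model updates. The scores, labels, and smoothing constant below are hypothetical, not the paper's objective.

```python
# A minimal sketch of a sigmoid-smoothed verification error in the MVE spirit.
import math

def smoothed_verification_error(scores, labels, gamma=2.0, theta=0.0):
    """scores: confidence scores; labels: 1 for hypotheses that should be accepted,
    0 for hypotheses that should be rejected. Returns the smoothed error rate."""
    err = 0.0
    for s, y in zip(scores, labels):
        d = (theta - s) if y == 1 else (s - theta)   # positive d means a decision error
        err += 1.0 / (1.0 + math.exp(-gamma * d))    # sigmoid-smoothed 0/1 loss
    return err / len(scores)

print(smoothed_verification_error([2.1, -0.5, 0.8, -1.7], [1, 1, 0, 0]))
```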

Fast Algorithm for Recognition of Korean Isolated Words (한국어 고립단어인식을 위한 고속 알고리즘)

  • 남명우;박규홍;정상국;노승용
    • The Journal of the Acoustical Society of Korea
    • /
    • v.20 no.1
    • /
    • pp.50-55
    • /
    • 2001
  • This paper presents a Korean isolated-word recognition algorithm that uses a new endpoint detection method, an auditory model, the 2D-DCT, and a new distance measure. The advantages of the proposed algorithm over conventional algorithms are its simple hardware construction and fast recognition time. For comparison with a conventional algorithm, we used the DTW method. As a result, we obtained a similar recognition rate for speaker-dependent Korean isolated words and a better rate for speaker-independent Korean isolated words, and the recognition time of the proposed algorithm was 200 times faster than the DTW algorithm. The proposed algorithm also produced good results in noisy environments.

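A minimal sketch of the 2D-DCT feature idea from the entry above: a time-frequency representation is compressed to a fixed-size block of low-order DCT coefficients so that words of different lengths can be compared with a simple distance. The random spectrograms and block size are placeholders; the paper's auditory model and endpoint detection are not reproduced.

```python
# A minimal sketch of 2D-DCT features and a simple distance for isolated-word templates.
import numpy as np
from scipy.fftpack import dct

def dct2_features(spectrogram, keep=(8, 8)):
    """2D DCT of a (frames x bins) spectrogram, keeping the low-order coefficient block."""
    coeffs = dct(dct(spectrogram, axis=0, norm="ortho"), axis=1, norm="ortho")
    return coeffs[:keep[0], :keep[1]].ravel()

def word_distance(spec_a, spec_b):
    """Euclidean distance between fixed-size 2D-DCT feature blocks."""
    return float(np.linalg.norm(dct2_features(spec_a) - dct2_features(spec_b)))

rng = np.random.default_rng(2)
spec_a = rng.random((60, 40))    # e.g., 60 frames x 40 mel-like bins (toy data)
spec_b = rng.random((75, 40))    # a different-length utterance
print(word_distance(spec_a, spec_b))  # fixed-size features allow direct comparison
```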