• Title/Summary/Keyword: Speech Recognition Technology

N-Best Reranking for Improving Automatic Speech Recognition of Korean (N-Best Re-ranking에 기반한 한국어 음성 인식 성능 개선)

  • Joung Lee; Mintaek Seo; Seung-Hoon Na; Minsoo Na; Maengsik Choi; Chunghee Lee
    • Annual Conference on Human and Language Technology / 2022.10a / pp.442-446 / 2022
  • Automatic Speech Recognition (ASR), or Speech-to-Text (STT), refers to the set of processes and technologies by which a computer converts spoken language into text data. As speech recognition is applied across a wide range of industries, the need for recognition that is both highly accurate and applicable to diverse domains continues to grow. For Korean, however, recognition is complicated by the distinction between plain and honorific speech and by word endings and particles, so post-processing of the recognition output is important for improving performance. This paper therefore proposes a model that improves Korean speech recognition by re-ranking the N-best recognition results.
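The abstract does not include an implementation; as a rough illustration of generic N-best re-ranking (not the authors' model), the sketch below re-scores each hypothesis by interpolating the recognizer score with an external language-model score. The `asr_scores` list and `lm_score` function are assumed interfaces.

```python
# Generic N-best re-ranking sketch (illustrative only, not the paper's model).
def rerank_nbest(hypotheses, asr_scores, lm_score, alpha=0.7):
    """Pick the hypothesis with the best interpolation of the recognizer
    score and an external language-model score (higher is better)."""
    rescored = [
        (alpha * asr + (1.0 - alpha) * lm_score(text), text)
        for text, asr in zip(hypotheses, asr_scores)
    ]
    return max(rescored)[1]

# Toy usage: a stand-in "LM" that simply prefers shorter hypotheses.
nbest = ["감사합니다", "감사 합니다", "감사 함니다"]
scores = [-12.3, -12.9, -13.1]          # recognizer log scores
print(rerank_nbest(nbest, scores, lambda s: -len(s.split())))
```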

Speech Recognition Performance Improvement using a convergence of GMM Phoneme Unit Parameter and Vocabulary Clustering (GMM 음소 단위 파라미터와 어휘 클러스터링을 융합한 음성 인식 성능 향상)

  • Oh, SangYeob
    • Journal of Convergence for Information Technology / v.10 no.8 / pp.35-39 / 2020
  • Compared to conventional speech recognition systems, DNN-based systems achieve lower error rates, but they are difficult to train in parallel, computationally expensive, and require large amounts of training data. To address this problem efficiently, this paper estimates GMM parameters at the phoneme-unit level, deriving per-phoneme model parameters from the GMM, and proposes improving performance through clustering of a specific vocabulary so that the parameters can be applied effectively. To this end, a vocabulary model was built from a database of three types of word speech, and noise processing with Wiener filters was used for feature extraction in the speech recognition experiments. The proposed method achieved a 97.9% recognition rate. Further work is needed to address the remaining overfitting problem.
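As a rough sketch of the phoneme-unit GMM idea (the clustering step and exact feature pipeline are not specified in the abstract, so the data layout below is an assumption), one GMM can be trained per phoneme with scikit-learn and each frame scored against all phoneme models:

```python
# Sketch: one diagonal-covariance GMM per phoneme unit, trained on
# MFCC-like feature vectors. Illustrative only; not the paper's exact pipeline.
import numpy as np
from sklearn.mixture import GaussianMixture

def train_phoneme_gmms(features_by_phoneme, n_components=4):
    """features_by_phoneme: dict phoneme -> array of shape (n_frames, n_dims)."""
    models = {}
    for phoneme, feats in features_by_phoneme.items():
        gmm = GaussianMixture(n_components=n_components, covariance_type="diag")
        gmm.fit(feats)
        models[phoneme] = gmm
    return models

def score_frame(models, frame):
    """Return the phoneme whose GMM gives the highest log-likelihood."""
    frame = np.atleast_2d(frame)
    return max(models, key=lambda p: models[p].score(frame))
```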

Enhancement of Ship's Wheel Order Recognition System using Speaker's Intention Predictive Parameters (화자의도예측 파라미터를 이용한 조타명령 음성인식 시스템의 개선)

  • Moon, Serng-Bae
    • Journal of Advanced Marine Engineering and Technology / v.32 no.5 / pp.791-797 / 2008
  • At sea, the officer of the deck (OOD) may sometimes have to keep lookout while also handling the autopilot without a quartermaster. The purpose of this paper is to develop a ship's autopilot control module based on speech recognition in order to reduce the potential risk of a one-man bridge system. Feature parameters predicting the OOD's intention were extracted from sample wheel orders written in the SMCP (IMO Standard Marine Communication Phrases). We designed a pre-recognition procedure that generates candidate words using the DTW (Dynamic Time Warping) algorithm and a post-recognition procedure that makes the final decision among the candidates using the intention-predictive feature parameters. To evaluate the effectiveness of these procedures, an experiment was conducted with 500 wheel orders.
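A minimal sketch of the DTW-based pre-recognition step, which shortlists candidate wheel-order words; the feature representation and the intention-based post-recognition stage are not shown, and the interfaces below are assumptions.

```python
# Sketch: DTW distance between two feature sequences, used to shortlist
# candidate wheel-order words before a separate post-recognition decision.
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Classic dynamic-time-warping distance between two (T, D) sequences."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def candidate_words(utterance, templates, k=3):
    """templates: dict word -> reference feature sequence.
    Return the k template words closest to the utterance under DTW."""
    return sorted(templates, key=lambda w: dtw_distance(utterance, templates[w]))[:k]
```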

Acoustic-Phonetic Phenotypes in Pediatric Speech Disorders: An Interdisciplinary Approach

  • Bunnell, H. Timothy
    • Proceedings of the KSPS conference / 2006.11a / pp.31-36 / 2006
  • Research in the Center for Pediatric Auditory and Speech Sciences (CPASS) is attempting to characterize or phenotype children with speech delays based on acoustic-phonetic evidence and relate those phenotypes to chromosome loci believed to be related to language and speech. To achieve this goal we have adopted a highly interdisciplinary approach that merges fields as diverse as automatic speech recognition, human genetics, neuroscience, epidemiology, and speech-language pathology. In this presentation I will trace the background of this project, and the rationale for our approach. Analyses based on a large amount of speech recorded from 18 children with speech delays will be presented to illustrate the approach we will be taking to characterize the acoustic phonetic properties of disordered speech in young children. The ultimate goal of our work is to develop non-invasive and objective measures of speech development that can be used to better identify which children with apparent speech delays are most in need of, or would receive the most benefit from the delivery of therapeutic services.

Noise Spectrum Estimation Using Line Spectral Frequencies for Robust Speech Recognition

  • Jang, Gil-Jin; Park, Jeong-Sik; Kim, Sang-Hun
    • The Journal of the Acoustical Society of Korea / v.31 no.3 / pp.179-187 / 2012
  • This paper presents a novel method for estimating reliable noise spectral magnitudes for acoustic background noise suppression when only a single-microphone recording is available. The proposed method derives noise estimates from spectral magnitudes measured at line spectral frequencies (LSFs), based on the observation that adjacent LSFs lie near the peak frequencies of the LPC spectrum while isolated LSFs lie close to its relatively flat valleys. The parameters used are the LPC coefficients, their corresponding LSFs, and the gain of the LPC residual signal, so the method is well suited to LPC-based speech coders.
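The sketch below shows how LSFs can be computed from LPC coefficients and how the LPC spectral magnitude can be read off at those frequencies, which is the raw material the method works with; it follows the textbook LSF construction rather than the paper's exact estimator, and the isolated-LSF selection step is not shown.

```python
# Sketch: LSFs from LPC coefficients, plus the LPC spectral magnitude at those
# frequencies. Magnitudes at isolated LSFs could then serve as a rough
# noise-floor estimate (selection step not shown).
import numpy as np

def lsf_from_lpc(a):
    """a: LPC coefficients [1, a1, ..., ap]. Returns LSFs in radians."""
    a = np.asarray(a, dtype=float)
    p_poly = np.concatenate([a, [0.0]]) + np.concatenate([[0.0], a[::-1]])
    q_poly = np.concatenate([a, [0.0]]) - np.concatenate([[0.0], a[::-1]])
    angles = []
    for poly in (p_poly, q_poly):
        ang = np.angle(np.roots(poly))
        # keep frequencies strictly inside (0, pi); drop trivial roots at 0 and pi
        angles.extend(w for w in ang if 1e-6 < w < np.pi - 1e-6)
    return np.sort(angles)

def lpc_magnitude_at(a, gain, freqs):
    """|gain / A(e^{jw})| evaluated at the given frequencies (radians)."""
    a = np.asarray(a, dtype=float)
    k = np.arange(len(a))
    return np.array([abs(gain / np.sum(a * np.exp(-1j * k * w))) for w in freqs])
```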

N- gram Adaptation Using Information Retrieval and Dynamic Interpolation Coefficient (정보검색 기법과 동적 보간 계수를 이용한 N-gram 언어모델의 적응)

  • Choi Joon Ki; Oh Yung-Hwan
    • MALSORI / no.56 / pp.207-223 / 2005
  • The goal of language model adaptation is to improve a background language model with a relatively small adaptation corpus. This study presents a language model adaptation technique for the case where no additional text data are available for adaptation. We propose using an information retrieval (IR) technique with N-gram language modeling to collect the adaptation corpus from the baseline text data, and a dynamic interpolation coefficient to combine the background and adapted language models. The interpolation coefficient is estimated from the word hypotheses obtained by segmenting the input speech data reserved as held-out validation data, which allows the final adapted model to improve on the background model consistently. The proposed approach reduces the word error rate by 13.6% relative to the baseline 4-gram model for two-hour broadcast news speech recognition.
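A minimal sketch of the interpolation step, assuming simple probability-lookup interfaces for the background and adapted n-gram models; the IR-based collection of the adaptation corpus is not shown, and the EM re-estimation here is a generic recipe rather than the paper's exact estimator.

```python
# Sketch: linear interpolation of two n-gram LMs, with the coefficient
# re-estimated by EM on held-out word hypotheses.
def estimate_lambda(heldout_ngrams, p_background, p_adapted, iters=20):
    """EM re-estimation of the interpolation weight on held-out n-grams."""
    lam = 0.5
    for _ in range(iters):
        posterior_sum = 0.0
        for ngram in heldout_ngrams:
            pb = lam * p_background(ngram)
            pa = (1.0 - lam) * p_adapted(ngram)
            posterior_sum += pb / (pb + pa + 1e-12)   # avoid division by zero
        lam = posterior_sum / len(heldout_ngrams)
    return lam

def interpolated_prob(ngram, lam, p_background, p_adapted):
    """Final adapted probability as a weighted mix of both models."""
    return lam * p_background(ngram) + (1.0 - lam) * p_adapted(ngram)
```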

Modeling Cross-morpheme Pronunciation Variations for Korean Large Vocabulary Continuous Speech Recognition (한국어 연속음성인식 시스템 구현을 위한 형태소 단위의 발음 변화 모델링)

  • Chung Minhwa; Lee Kyong-Nim
    • MALSORI / no.49 / pp.107-121 / 2004
  • In this paper, we describe a cross-morpheme pronunciation variation model that is especially useful for constructing a morpheme-based pronunciation lexicon to improve the performance of a Korean LVCSR system. Many pronunciation variations occur at morpheme boundaries in continuous speech. Since phonemic context, morphological category, and morpheme boundary information together affect Korean pronunciation variations, we distinguish phonological rules that apply within morphemes from those that apply across morpheme boundaries. Results of 33K-morpheme Korean CSR experiments show that modeling the proposed pronunciation variations with a context-dependent lexicon allowing multiple pronunciations achieved an absolute reduction of 1.45% in WER from the baseline performance of 18.42% WER.
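As an illustration of cross-morpheme pronunciation modeling (the rule pairs below are simplified examples of Korean boundary phenomena such as nasalisation, not the paper's rule set), pronunciation variants can be generated by applying boundary rules when two morphemes' phone sequences are concatenated:

```python
# Sketch: generate cross-morpheme pronunciation variants by applying simple
# phonological rules at morpheme boundaries. Rule set is illustrative only.

# Each rule maps (last phone of left morpheme, first phone of right morpheme)
# to a replacement pair, e.g. obstruent nasalisation before a nasal.
CROSS_MORPHEME_RULES = {
    ("k", "n"): ("ng", "n"),   # e.g. /k/ + /n/ -> [ng n]
    ("p", "m"): ("m", "m"),    # e.g. /p/ + /m/ -> [m m]
}

def apply_boundary_rules(left_phones, right_phones):
    """Return pronunciation variants for two adjacent morphemes: the canonical
    form plus any variant produced by a cross-morpheme rule."""
    variants = [(left_phones, right_phones)]          # canonical pronunciation
    key = (left_phones[-1], right_phones[0])
    if key in CROSS_MORPHEME_RULES:
        new_l, new_r = CROSS_MORPHEME_RULES[key]
        variants.append((left_phones[:-1] + [new_l], [new_r] + right_phones[1:]))
    return variants
```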

Language Model Adaptation for Conversational Speech Recognition (대화체 연속음성 인식을 위한 언어모델 적응)

  • Park Young-Hee; Chung Minhwa
    • Proceedings of the KSPS conference / 2003.05a / pp.83-86 / 2003
  • This paper presents our style-based language model adaptation for Korean conversational speech recognition. Compared with written text corpora, Korean conversational speech exhibits various characteristics of content and style, such as filled pauses, word omission, and contraction. For style-based language model adaptation, we report two approaches. Both focus on improving the estimation of domain-dependent n-gram models by relevance weighting of out-of-domain text data, where style is represented by n-gram-based tf*idf similarity. In addition to relevance weighting, we use disfluencies as predictors of the neighboring words. The best result reduces the word error rate by 6.5% absolute, showing that n-gram-based relevance weighting captures the style difference well and that disfluencies are good predictors.
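A rough sketch of n-gram tf*idf relevance weighting, using scikit-learn's TfidfVectorizer to score out-of-domain documents against an in-domain (conversational) seed corpus and then accumulating similarity-weighted n-gram counts; the paper's exact weighting scheme and disfluency modeling are not reproduced here.

```python
# Sketch: weight out-of-domain documents by n-gram tf*idf similarity to an
# in-domain seed text, then mix their n-gram counts with that weight.
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def relevance_weighted_counts(seed_text, documents, n=2):
    vec = TfidfVectorizer(ngram_range=(1, n))
    mat = vec.fit_transform([seed_text] + documents)
    sims = cosine_similarity(mat[0], mat[1:]).ravel()   # similarity to seed
    counts = Counter()
    for doc, w in zip(documents, sims):
        tokens = doc.split()
        for i in range(len(tokens) - n + 1):
            counts[tuple(tokens[i:i + n])] += w         # weighted n-gram counts
    return counts
```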

A Study of CHMM Reducing Computational Load Using VQ with Multiple Streams (다중 Stream 구조를 가지는 VQ를 이용하여 연산량을 개선한 CHMM에 관한 연구)

  • Bang, Young Gue; Chung, IK Joo
    • Journal of Industrial Technology / v.26 no.B / pp.233-242 / 2006
  • Continuous, discrete, and semi-continuous HMM systems are all used for speech recognition. Discrete systems have the advantage of low run-time computation, but vector quantization reduces accuracy and can lead to poor performance. Continuous systems give good accuracy but require so much computation that they are sometimes impractical. Semi-continuous systems combine advantages of both, yet they also require substantial computation. In this paper, we propose a way to reduce the computation of continuous systems. The proposed method has the same computational load as a discrete system but gives better recognition accuracy.
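A minimal sketch of the general idea of using VQ to shortlist which Gaussians to evaluate for each frame; the multi-stream structure and the codeword-to-Gaussian mapping below are assumptions for illustration, not the paper's design.

```python
# Sketch: reduce CHMM likelihood computation by vector-quantising each frame
# and evaluating only the Gaussians mapped to that codeword.
import numpy as np

def quantise(frame, codebook):
    """Return the index of the nearest codeword for this feature frame."""
    return int(np.argmin(np.linalg.norm(codebook - frame, axis=1)))

def fast_state_likelihood(frame, codebook, shortlist, gaussians):
    """shortlist: codeword index -> indices of Gaussians worth evaluating.
    gaussians: list of (weight, mean, diag_var) tuples for one HMM state."""
    code = quantise(frame, codebook)
    total = 0.0
    for g in shortlist[code]:
        w, mean, var = gaussians[g]
        d = frame - mean
        total += w * np.exp(-0.5 * np.sum(d * d / var)) / np.sqrt(
            np.prod(2.0 * np.pi * var))
    return total
```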

A study on Autonomous Travelling Control of Mobile Robot (이동로봇의 자율주행제어에 관한 연구)

  • Lee, Woo-Song; Shim, Hyun-Seok; Ha, Eun-Tae; Kim, Jong-Soo
    • Journal of the Korean Society of Industry Convergence / v.18 no.1 / pp.10-17 / 2015
  • This paper describes research on the remote control of a mobile robot by voice command. Using real-time remote control over a wireless network, voice commands captured by a microphone on a PC are recognized and transmitted to an unmanned robot, enabling applications such as home security and monitoring. In this research, the speed and direction of a self-driving robot with two drive wheels were controlled through speech recognition over the wireless remote-control link in order to verify the mobile robot's performance.
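As a toy illustration of the command-to-motion mapping such a system needs (the command set and the robot interface below are assumptions made for illustration, not taken from the paper), recognized words can be translated into wheel-speed commands for a two-wheel drive robot:

```python
# Sketch: map recognised voice commands to differential-drive wheel speeds.
COMMANDS = {
    "forward": (0.3, 0.3),    # (left wheel, right wheel) speeds in m/s
    "back":    (-0.3, -0.3),
    "left":    (-0.2, 0.2),
    "right":   (0.2, -0.2),
    "stop":    (0.0, 0.0),
}

def handle_command(recognised_text, send_to_robot):
    """Look up the recognised word and forward wheel speeds to the robot."""
    word = recognised_text.strip().lower()
    if word in COMMANDS:
        send_to_robot(*COMMANDS[word])
    else:
        send_to_robot(0.0, 0.0)   # unknown command: stop for safety

# Example: print instead of driving real hardware.
handle_command("forward", lambda l, r: print(f"L={l} R={r}"))
```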