• Title/Abstract/Keyword: Automatic Speech Recognition

Search results: 212 items (processing time: 0.033 s)

음성명령에 의한 모바일로봇의 실시간 무선원격 제어 실현 (Real-Time Implementation of Wireless Remote Control of Mobile Robot Based-on Speech Recognition Command)

  • 심병균;한성현
    • 한국생산제조학회지 / Vol. 20, No. 2 / pp.207-213 / 2011
  • In this paper, we present a real-time implementation of a mobile robot to which an interactive voice recognition technique is applied. Speech commands are uttered as sentential connected words and issued through the wireless remote control system. We implement an automatic distant speech command recognition system for interactive voice-enabled services. We first construct a baseline automatic speech command recognition system, whose acoustic models are trained on speech utterances recorded with a microphone. To improve the performance of this baseline system, the acoustic models are adapted to adjust for the spectral characteristics of different microphones and for the environmental mismatch between cross-talking and distant speech. Experiments illustrate the performance of the developed system: the average recognition rate of the proposed system is about 95% or higher.
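The abstract does not spell out the adaptation procedure; as a hint of how channel and microphone mismatch is commonly compensated in the cepstral domain, here is a minimal sketch of per-utterance cepstral mean and variance normalization (CMVN). The function and parameter names are illustrative, not taken from the paper.

```python
import numpy as np

def cmvn(features: np.ndarray) -> np.ndarray:
    """Per-utterance cepstral mean and variance normalization.

    features: (num_frames, num_coeffs) matrix of MFCCs.
    Subtracting the utterance mean removes a constant channel/microphone
    offset in the cepstral domain; dividing by the standard deviation
    equalizes the dynamic range across recording conditions.
    """
    mean = features.mean(axis=0, keepdims=True)
    std = features.std(axis=0, keepdims=True) + 1e-8  # avoid division by zero
    return (features - mean) / std

# Example: normalize a dummy 100-frame, 13-coefficient MFCC matrix.
mfcc = np.random.randn(100, 13) * 3.0 + 5.0
normalized = cmvn(mfcc)
print(normalized.mean(axis=0).round(6), normalized.std(axis=0).round(6))
```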

AI-based language tutoring systems with end-to-end automatic speech recognition and proficiency evaluation

  • Byung Ok Kang;Hyung-Bae Jeon;Yun Kyung Lee
    • ETRI Journal / Vol. 46, No. 1 / pp.48-58 / 2024
  • This paper presents the development of language tutoring systems for nonnative speakers by leveraging advanced end-to-end automatic speech recognition (ASR) and proficiency evaluation. Given the frequent errors in non-native speech, high-performance spontaneous speech recognition must be applied. Our systems accurately evaluate pronunciation and speaking fluency and provide feedback on errors by relying on precise transcriptions. End-to-end ASR is implemented and enhanced by using diverse non-native speaker speech data for model training. For performance enhancement, we combine semisupervised and transfer learning techniques using labeled and unlabeled speech data. Automatic proficiency evaluation is performed by a model trained to maximize the statistical correlation between the fluency score manually determined by a human expert and a calculated fluency score. We developed an English tutoring system for Korean elementary students called EBS AI Peng-Talk and a Korean tutoring system for foreigners called KSI Korean AI Tutor. Both systems were deployed by South Korean government agencies.
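The proficiency model described above is trained to maximize the correlation between human and automatic fluency scores; the snippet below is a minimal, illustrative computation of such a correlation (Pearson's r) on hypothetical scores, not the authors' training code.

```python
import numpy as np

def pearson_r(machine_scores, human_scores):
    """Pearson correlation between automatic and human fluency scores."""
    x = np.asarray(machine_scores, dtype=float)
    y = np.asarray(human_scores, dtype=float)
    x = x - x.mean()
    y = y - y.mean()
    return float((x @ y) / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12))

# Hypothetical scores for five learner utterances (0-5 scale).
machine = [3.2, 4.1, 2.5, 4.8, 3.0]
human = [3.0, 4.0, 2.0, 5.0, 3.5]
print(f"Pearson r = {pearson_r(machine, human):.3f}")
```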

HMM-Based Automatic Speech Recognition using EMG Signal

  • Lee Ki-Seung
    • 대한의용생체공학회:의공학회지 / Vol. 27, No. 3 / pp.101-109 / 2006
  • It has been known that there is a strong relationship between human voices and the movements of the articulatory facial muscles. In this paper, we utilize this knowledge to implement an automatic speech recognition scheme that uses surface electromyogram (EMG) signals alone. The EMG signals were acquired from three articulatory facial muscles. In a preliminary evaluation, 10 Korean digits were used as the recognition vocabulary. Various feature parameters, including filter bank outputs, linear predictive coefficients, and cepstrum coefficients, were evaluated to find parameters appropriate for EMG-based speech recognition. The sequence of EMG signals for each word is modelled within a hidden Markov model (HMM) framework. A continuous word recognition approach was investigated, so the model for each word was obtained by concatenating subword models, and embedded re-estimation was employed in the training stage. The findings indicate that such a system can recognize speech with an accuracy of up to 90% when mel-filter bank outputs are used as the recognition features.
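Mel-filter-bank outputs are reported above as the best-performing feature; the sketch below shows one way to compute framewise log mel-filter-bank energies from a 1-D signal, using librosa for convenience. The frame sizes and 2 kHz sampling rate are assumptions, not the paper's settings.

```python
import numpy as np
import librosa  # used only as a convenient filter-bank implementation

def mel_filterbank_features(signal: np.ndarray, sr: int = 2000,
                            n_fft: int = 256, hop: int = 128,
                            n_mels: int = 20) -> np.ndarray:
    """Log mel-filter-bank energies for a 1-D (EMG or speech) signal.

    Returns a (num_frames, n_mels) matrix suitable as HMM observations.
    The frame sizes and 2 kHz sampling rate are illustrative only.
    """
    mel_spec = librosa.feature.melspectrogram(
        y=signal.astype(float), sr=sr, n_fft=n_fft,
        hop_length=hop, n_mels=n_mels, power=2.0)
    return np.log(mel_spec + 1e-10).T  # transpose to frames x coefficients

# Example on a synthetic 1-second signal.
emg = np.random.randn(2000)
feats = mel_filterbank_features(emg)
print(feats.shape)
```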

한국어 자동 발음열 생성 시스템을 위한 예외 발음 연구 (A Study on Exceptional Pronunciations For Automatic Korean Pronunciation Generator)

  • 김선희
    • 대한음성학회지:말소리 / No. 48 / pp.57-67 / 2003
  • This paper presents a systematic description of exceptional pronunciations for automatic Korean pronunciation generation. An automatic pronunciation generator is an essential part of Korean speech recognition and TTS (Text-To-Speech) systems. It is composed of a set of regular rules and an exceptional pronunciation dictionary. The dictionary is created by extracting words with exceptional pronunciations, based on phonological research into the characteristics of such words and a systematic analysis of the entries of Korean dictionaries. The method thus contributes to improving the performance of automatic pronunciation generation in Korean, and thereby of Korean speech recognition and TTS systems.
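A pronunciation generator of the kind described consults the exception dictionary first and falls back to the regular rules otherwise. The sketch below illustrates only that control flow; the dictionary entry and the trivial rule function are placeholders, not the paper's rules or data.

```python
# Illustrative only: the exception entry and the single "rule" below are
# placeholders, not the phonological rules or dictionary from the paper.
EXCEPTION_DICT = {
    "맛없다": "마덥따",   # hypothetical exceptional-pronunciation entry
}

def apply_regular_rules(word: str) -> str:
    """Placeholder for the regular rule component (identity here)."""
    return word

def generate_pronunciation(word: str) -> str:
    """Exception dictionary first, regular rules as the fallback."""
    if word in EXCEPTION_DICT:
        return EXCEPTION_DICT[word]
    return apply_regular_rules(word)

print(generate_pronunciation("맛없다"))   # looked up in the exception dictionary
print(generate_pronunciation("학교"))     # handled by the regular rules
```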


CDMA이동통신환경에서의 음성인식을 위한 왜곡음성신호 거부방법 (Distorted Speech Rejection For Automatic Speech Recognition under CDMA Wireless Communication)

  • 김남수;장준혁
    • 한국음향학회지 / Vol. 23, No. 8 / pp.597-601 / 2004
  • In this paper, we introduce a preprocessing rejection method for distorted speech signals for speech recognition in a CDMA mobile communication environment. First, we analyze speech signals distorted by the CDMA mobile communication channel and, based on the identified distortion mechanism, propose an algorithm that rejects channel-distorted speech signals using the quasi-periodicity of speech. Experiments show that the proposed preprocessing rejection method can be applied to speech recognition with low computational cost and effectively rejects channel-distorted speech signals in CDMA environments.
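The rejection criterion above relies on the quasi-periodicity of voiced speech. One plausible realization, sketched below, measures the peak of the normalized autocorrelation within a pitch-lag range and rejects utterances whose frames score too low; the threshold and decision rule are illustrative, not the paper's.

```python
import numpy as np

def periodicity(frame: np.ndarray, sr: int = 8000,
                f0_min: float = 60.0, f0_max: float = 400.0) -> float:
    """Peak normalized autocorrelation within the pitch-lag range (0..1)."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    if ac[0] <= 0:
        return 0.0
    ac = ac / ac[0]
    lo, hi = int(sr / f0_max), int(sr / f0_min)
    return float(ac[lo:hi].max())

def reject_distorted(frames, threshold: float = 0.4) -> bool:
    """Reject an utterance whose frames show weak periodicity.

    The 0.4 threshold and the simple mean-based decision are illustrative.
    """
    scores = [periodicity(f) for f in frames]
    return float(np.mean(scores)) < threshold

# Example: 30 random-noise frames of 240 samples look aperiodic and are rejected.
noise_frames = [np.random.randn(240) for _ in range(30)]
print(reject_distorted(noise_frames))  # True: low periodicity -> rejected
```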

자동 대소문자 식별을 이용한 영어 음성인식 결과의 가독성 향상 (Readability Enhancement of English Speech Recognition Output Using Automatic Capitalisation Classification)

  • 김지환
    • 대한음성학회지:말소리 / No. 61 / pp.101-111 / 2007
  • A modified speech recogniser is proposed for automatic capitalisation generation to improve the readability of English speech recognition output. In this modified recogniser, every word in the vocabulary is duplicated: once in its de-capitalised form and again in its capitalised forms. In addition, its language model is re-trained on mixed-case texts. To evaluate the performance of the proposed system, automatic capitalisation experiments were performed on 3 hours of Broadcast News (BN) test data using the modified HTK BN transcription system. The proposed system produced an F-measure of 0.7317 for automatic capitalisation generation, with an SER of 48.55, a precision of 0.7736, and a recall of 0.6942.
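As a quick sanity check of the reported numbers, the F-measure is the harmonic mean of precision and recall; the short computation below reproduces the quoted value up to rounding.

```python
def f_measure(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (F1)."""
    return 2.0 * precision * recall / (precision + recall)

# Figures quoted in the abstract above.
print(round(f_measure(0.7736, 0.6942), 4))  # ~0.7318, matching the reported 0.7317
```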


JAVABeans Component 구조를 갖는 음성인식 시스템에서의 Voice Web Browsing에 관한 연구 (A Study on Voice Web Browsing in JAVA Beans Component Architecture Automatic Speech Recognition Application System.)

  • 장준식;윤재석
    • 한국정보통신학회:학술대회논문집 / 한국해양정보통신학회 2003년도 춘계종합학술대회 / pp.273-276 / 2003
  • In this study, we designed and implemented a speech recognition flight information system that turns conventional GUI-centric web applications into VUI-centric web applications. Considerable web-related ASR (Automatic Speech Recognition) research has recently been carried out on systems implemented with ASP (Active Server Pages) and running on Windows servers, but the limitations of ASP with respect to the web have constrained system speed and portability. To overcome these constraints, this study investigates a JavaBeans component architecture that implements speech information and dynamic VoiceXML. We also study voice web browsing that can deliver speech and graphical information simultaneously in both GUI and VUI using Remote AWT (Abstract Window Toolkit) technology.


Feature Extraction Based on Speech Attractors in the Reconstructed Phase Space for Automatic Speech Recognition Systems

  • Shekofteh, Yasser;Almasganj, Farshad
    • ETRI Journal / Vol. 35, No. 1 / pp.100-108 / 2013
  • In this paper, a feature extraction (FE) method is proposed that is comparable to the traditional FE methods used in automatic speech recognition systems. Unlike the conventional spectral-based FE methods, the proposed method evaluates the similarities between an embedded speech signal and a set of predefined speech attractor models in the reconstructed phase space (RPS) domain. In the first step, a set of Gaussian mixture models is trained to represent the speech attractors in the RPS. Next, for a new input speech frame, a posterior-probability-based feature vector is evaluated, which represents the similarity between the embedded frame and the learned speech attractors. We conduct experiments for a speech recognition task utilizing a toolkit based on hidden Markov models, over FARSDAT, a well-known Persian speech corpus. Through the proposed FE method, we gain 3.11% absolute phoneme error rate improvement in comparison to the baseline system, which exploits the mel-frequency cepstral coefficient FE method.
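The feature extraction described above combines a time-delay (reconstructed phase space) embedding with posterior probabilities under per-class attractor GMMs. The sketch below shows that pipeline with scikit-learn's GaussianMixture on toy data; the embedding dimension, delay, and mixture size are arbitrary choices, not the paper's configuration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def rps_embed(frame: np.ndarray, dim: int = 3, delay: int = 5) -> np.ndarray:
    """Time-delay embedding of one frame into the reconstructed phase space.

    Returns (num_points, dim); dim and delay are illustrative choices.
    """
    n = len(frame) - (dim - 1) * delay
    return np.stack([frame[i * delay: i * delay + n] for i in range(dim)], axis=1)

# Train one GMM per attractor class on embedded training frames (toy data here).
rng = np.random.default_rng(0)
train_frames = {"class_a": rng.standard_normal(400),
                "class_b": np.sin(np.linspace(0, 60, 400)) + 0.1 * rng.standard_normal(400)}
gmms = {name: GaussianMixture(n_components=4, random_state=0).fit(rps_embed(sig))
        for name, sig in train_frames.items()}

def posterior_feature(frame: np.ndarray) -> np.ndarray:
    """Posterior-probability feature vector over the learned attractor models."""
    points = rps_embed(frame)
    log_lik = np.array([g.score(points) for g in gmms.values()])  # mean log-likelihoods
    log_lik -= log_lik.max()
    post = np.exp(log_lik)
    return post / post.sum()

test_frame = np.sin(np.linspace(0, 60, 400))
print(dict(zip(gmms.keys(), posterior_feature(test_frame).round(3))))
```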

음성인식 시스템에서의 Voice Web Browsing에 관한 연구 (A Study on Voice Web Browsing in Automatic Speech Recognition Application System)

  • 윤재석
    • 한국정보통신학회논문지 / Vol. 7, No. 5 / pp.949-954 / 2003
  • In this study, we designed and implemented a speech recognition flight information system that turns conventional GUI-centric web applications into VUI-centric web applications. Considerable web-related ASR (Automatic Speech Recognition) research has recently been conducted on systems implemented with ASP (Active Server Pages) and running on Windows servers, but the limitations of ASP with respect to the web have constrained system speed and portability. To resolve these constraints, this study investigates a JavaBeans component architecture that implements speech information and dynamic VoiceXML. Using Remote AWT (Abstract Window Toolkit) technology, we also confirmed the feasibility of voice web browsing that delivers speech and graphical information simultaneously in both GUI and VUI.
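The architecture described returns dynamically generated VoiceXML from a server-side component. The sketch below builds such a document in Python purely for illustration; the paper's implementation uses JavaBeans, and the form, prompt, and grammar names here are made up.

```python
from xml.sax.saxutils import escape

def build_voicexml(prompt: str, grammar_src: str) -> str:
    """Return a minimal VoiceXML 2.0 document with one spoken-input field.

    The prompt and grammar URI are placeholders; a real flight-information
    service would fill these from live data, as the server-side component
    in the paper does.
    """
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <form id="flight_info">
    <field name="destination">
      <prompt>{escape(prompt)}</prompt>
      <grammar src="{escape(grammar_src)}" type="application/srgs+xml"/>
      <filled>
        <submit next="/flight/search" namelist="destination"/>
      </filled>
    </field>
  </form>
</vxml>"""

print(build_voicexml("Which city are you flying to?", "cities.grxml"))
```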

Noise Robust Automatic Speech Recognition Scheme with Histogram of Oriented Gradient Features

  • Park, Taejin;Beack, SeungKwan;Lee, Taejin
    • IEIE Transactions on Smart Processing and Computing / Vol. 3, No. 5 / pp.259-266 / 2014
  • In this paper, we propose a novel technique for noise-robust automatic speech recognition (ASR). The development of ASR techniques has made it possible to recognize isolated words with a near-perfect word recognition rate. However, in a highly noisy environment, a distinct mismatch between the trained speech and the test data results in a significantly degraded word recognition rate (WRA). Unlike conventional ASR systems employing Mel-frequency cepstral coefficients (MFCCs) and a hidden Markov model (HMM), this study applies histogram of oriented gradient (HOG) features and a support vector machine (SVM) to ASR tasks to overcome this problem. Our proposed ASR system is less vulnerable to external interference noise and achieves a higher WRA than a conventional ASR system equipped with MFCCs and an HMM. The performance of the proposed ASR system was evaluated using a phonetically balanced word (PBW) set mixed with artificially added noise.
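A minimal sketch of the HOG-plus-SVM idea described above: treat each word's fixed-size (log-)spectrogram as an image, extract HOG descriptors with scikit-image, and classify with a scikit-learn SVM. The shapes, HOG parameters, and toy data are illustrative, not the paper's configuration.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_from_spectrogram(spec: np.ndarray) -> np.ndarray:
    """HOG descriptor of a fixed-size log-spectrogram 'image'."""
    return hog(spec, orientations=8, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

# Toy data: 40 fixed-size "spectrograms" (64 bins x 64 frames), 2 word classes.
rng = np.random.default_rng(1)
specs = rng.random((40, 64, 64))
specs[20:, 16:32, :] += 2.0           # give class 1 a distinctive band
labels = np.array([0] * 20 + [1] * 20)

features = np.stack([hog_from_spectrogram(s) for s in specs])
clf = SVC(kernel="rbf", C=1.0).fit(features, labels)
print(clf.score(features, labels))    # training accuracy on the toy set
```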