• Title/Abstract/Keyword: automatic speech recognition

212 search results

한국 표준어 연속음성에서의 억양구와 강세구 자동 검출 (Automatic Detection of Intonational and Accentual Phrases in Korean Standard Continuous Speech)

  • 이기영;송민석
    • 음성과학 / Vol. 7, No. 2 / pp. 209-224 / 2000
  • This paper proposes an automatic detection method for intonational and accentual phrases in Korean standard continuous speech. We use pauses longer than 150 msec to detect intonational phrases, and we extract accentual phrases from the intonational phrases by analyzing syllables and pitch contours. The speech data for the experiment consist of seven male and two female voices reading the fable 'The Ant and the Grasshopper' and a newspaper article, 'Manmulsang', at normal speed in standard Korean. The results show an average detection rate of 95% for intonational phrases and 73% for accentual phrases. These rates imply that continuous speech can be segmented into smaller units (i.e., prosodic phrases) using prosodic information, so that the targets of speech recognition can be narrowed down to words or phrases within continuous speech.
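
As a rough illustration of the pause rule above, the sketch below segments a waveform at silences longer than 150 ms. It assumes a simple frame-energy silence detector; the energy threshold, frame size, and function name are illustrative choices, not the paper's implementation.

```python
import numpy as np

def detect_intonational_phrases(signal, sr, pause_ms=150, frame_ms=10,
                                energy_thresh=1e-4):
    """Split speech at pauses longer than pause_ms (the paper's 150 ms rule).

    Returns (start, end) sample indices of phrase candidates. The
    energy-based silence detector and its threshold are illustrative.
    """
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    silent = (frames ** 2).mean(axis=1) < energy_thresh

    min_pause = pause_ms // frame_ms              # 150 ms -> 15 frames
    phrases, start, pause_run = [], None, 0
    for i, is_silent in enumerate(silent):
        if not is_silent:
            if start is None:
                start = i                         # speech begins
            pause_run = 0
        else:
            pause_run += 1
            if start is not None and pause_run == min_pause:
                # close the phrase at the onset of the long pause
                phrases.append((start * frame_len,
                                (i - min_pause + 1) * frame_len))
                start = None
    if start is not None:
        phrases.append((start * frame_len, n_frames * frame_len))
    return phrases
```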


음성인식기를 이용한 한국인의 외국어 발화오류 자동 검출 (Automatic Detection of Mispronunciation Using Phoneme Recognition For Foreign Language Instruction)

  • 권철홍;강효원;이상필
    • 대한음성학회지:말소리 / No. 48 / pp. 127-139 / 2003
  • An automatic pronunciation correction system provides learners with correction guidelines for each mispronunciation. In this paper, we propose an HMM-based speech recognizer that automatically classifies pronunciation errors made by Koreans speaking Japanese. For this purpose, we also develop phoneme recognizers for Korean and Japanese. Experimental results show that the machine scores of the proposed recognizer correlate well with expert ratings.
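
The abstract does not give the scoring details, but a common way to turn a phoneme recognizer into a mispronunciation detector is to align its output against the canonical phoneme sequence of the prompt and report the differences. A minimal sketch under that assumption (the phoneme labels in the example are illustrative):

```python
from difflib import SequenceMatcher

def mispronunciation_report(canonical, recognized):
    """Align the canonical phoneme sequence of a prompt against the
    sequence returned by a phoneme recognizer and list the differences.

    `canonical` and `recognized` are lists of phoneme labels. This
    alignment-based error listing is a generic sketch, not the paper's
    exact HMM scoring procedure.
    """
    ops = SequenceMatcher(a=canonical, b=recognized).get_opcodes()
    errors = []
    for tag, i1, i2, j1, j2 in ops:
        if tag == 'replace':
            errors.append(('substitution', canonical[i1:i2], recognized[j1:j2]))
        elif tag == 'delete':
            errors.append(('deletion', canonical[i1:i2], []))
        elif tag == 'insert':
            errors.append(('insertion', [], recognized[j1:j2]))
    # Fraction of canonical phonemes realized correctly.
    score = 1.0 - sum(i2 - i1 for tag, i1, i2, _, _ in ops
                      if tag != 'equal') / max(len(canonical), 1)
    return score, errors

# Illustrative case: a Korean learner's /z/ -> /j/ substitution in Japanese.
score, errs = mispronunciation_report(['z', 'a', 'ss', 'i'],
                                      ['j', 'a', 'ss', 'i'])
```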


임베디드 디바이스에서 음성 인식 알고리듬 구현을 위한 부동 소수점 연산의 고정 소수점 연산 변환 기법 (Automatic Floating-Point to Fixed-Point Conversion for Speech Recognition in Embedded Device)

  • 윤성락;유창동
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2007년도 하계종합학술대회 논문집 / pp. 305-306 / 2007
  • This paper proposes an automatic method for converting floating-point computations to fixed-point computations, for implementing automatic speech recognition (ASR) algorithms on embedded devices.
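
For context, the core of any float-to-fixed conversion is mapping each value to a scaled integer in a Qm.n format and rescaling after multiplies. A minimal sketch (the Q1.15 word format and helper names are illustrative; the paper's contribution is automating the per-variable choice of format):

```python
def to_fixed(x, frac_bits=15, word_bits=16):
    """Quantize a float to a signed fixed-point integer (Q1.15 by default),
    saturating at the word's range limits."""
    scale = 1 << frac_bits
    lo, hi = -(1 << (word_bits - 1)), (1 << (word_bits - 1)) - 1
    return max(lo, min(hi, int(round(x * scale))))

def fixed_mul(a, b, frac_bits=15):
    """Multiply two fixed-point values; the raw product carries
    2*frac_bits fractional bits, so shift right to restore the format."""
    return (a * b) >> frac_bits

a = to_fixed(0.5)                     # 16384 in Q1.15
b = to_fixed(-0.25)                   # -8192
print(fixed_mul(a, b) / (1 << 15))    # -0.125
```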


인간의 언어와 얼굴 표정에 통하여 자동적으로 감정 인식 시스템 새로운 접근법 (Automatic Human Emotion Recognition from Speech and Face Display - A New Approach)

  • 딩�E령;이영구;이승룡
    • 한국정보과학회:학술대회논문집 / 한국정보과학회 2011년도 한국컴퓨터종합학술대회논문집 Vol. 38, No. 1(B) / pp. 231-234 / 2011
  • Audiovisual human emotion recognition can be considered a good approach to multimodal human-computer interaction. However, optimal fusion of the multimodal information remains a challenge. To overcome these limitations and make the interface robust, we propose a framework for automatic human emotion recognition from speech and face display. In this paper, we develop a new approach to model-level fusion that exploits the relationship between speech and facial expression to automatically detect temporal segments and perform multimodal information fusion.
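
The paper fuses at the model level using the temporal relationship between the two streams; as a much simpler reference point, the sketch below combines per-class posteriors from an audio model and a face model with a single weight. All names and the example scores are illustrative.

```python
import numpy as np

def fuse_emotion_scores(p_audio, p_face, alpha=0.5):
    """Weighted log-linear fusion of per-class emotion posteriors from a
    speech model and a face-expression model. The paper's model-level
    fusion additionally models temporal relationships between the
    streams, which this simplification omits."""
    log_p = (alpha * np.log(p_audio + 1e-12)
             + (1 - alpha) * np.log(p_face + 1e-12))
    p = np.exp(log_p - log_p.max())       # renormalize in a stable way
    return p / p.sum()

emotions = ['anger', 'happiness', 'sadness', 'neutral']
p_a = np.array([0.6, 0.1, 0.2, 0.1])      # audio model output (illustrative)
p_f = np.array([0.3, 0.4, 0.2, 0.1])      # face model output (illustrative)
print(emotions[int(np.argmax(fuse_emotion_scores(p_a, p_f)))])
```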

FPGA-Based Hardware Accelerator for Feature Extraction in Automatic Speech Recognition

  • Choo, Chang;Chang, Young-Uk;Moon, Il-Young
    • Journal of Information and Communication Convergence Engineering / Vol. 13, No. 3 / pp. 145-151 / 2015
  • We describe in this paper a hardware-based scheme for improving the speed of a real-time automatic speech recognition (ASR) system by designing a parallel feature extraction algorithm on a Field-Programmable Gate Array (FPGA). A computationally intensive block in the algorithm is identified and implemented in hardware logic on the FPGA: the mel-frequency cepstrum coefficient (MFCC) algorithm used in the feature extraction process. We demonstrate that the FPGA platform can perform the feature extraction computation of the speech recognition system more efficiently than general-purpose CPUs, including the ARM processor. The Xilinx Zynq-7000 System on Chip (SoC) platform is used for the MFCC implementation. From this implementation, we confirmed that the FPGA platform is approximately 500× faster than a sequential CPU implementation and 60× faster than a sequential ARM implementation. We thus verified that a parallelized and optimized MFCC architecture on an FPGA can significantly improve the execution time of an ASR system compared with CPU and ARM platforms.
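
For reference, the MFCC stages the paper maps onto FPGA logic (FFT, mel filterbank, log, DCT) look as follows in plain NumPy for a single windowed frame; the filter count and FFT size are common defaults, not the paper's exact configuration.

```python
import numpy as np
from scipy.fftpack import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc_frame(frame, sr=16000, n_fft=512, n_mels=26, n_ceps=13):
    """MFCCs for one windowed speech frame:
    FFT -> mel filterbank -> log -> DCT."""
    power = np.abs(np.fft.rfft(frame, n_fft)) ** 2            # power spectrum

    # Triangular mel filterbank between 0 Hz and Nyquist.
    pts = mel_to_hz(np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        left, center, right = bins[i], bins[i + 1], bins[i + 2]
        fbank[i, left:center] = ((np.arange(left, center) - left)
                                 / max(center - left, 1))
        fbank[i, center:right] = ((right - np.arange(center, right))
                                  / max(right - center, 1))

    log_mel = np.log(fbank @ power + 1e-10)                   # log mel energies
    return dct(log_mel, type=2, norm='ortho')[:n_ceps]        # low-order coeffs
```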

원어민 및 외국인 화자의 음성인식을 위한 심층 신경망 기반 음향모델링 (DNN-based acoustic modeling for speech recognition of native and foreign speakers)

  • 강병옥;권오욱
    • 말소리와 음성과학 / Vol. 9, No. 2 / pp. 95-101 / 2017
  • This paper proposes a new method for training Deep Neural Network (DNN)-based acoustic models for speech recognition of native and foreign speakers. The proposed method consists of determining multi-set state clusters with various acoustic properties, training a DNN-based acoustic model, and recognizing speech based on the model. In the proposed method, the hidden nodes of the DNN are shared, but the output nodes are separated to accommodate the different acoustic properties of native and foreign speech. In an English speech recognition task for native Korean and native English speakers, the proposed method slightly improves recognition accuracy compared with the conventional multi-condition training method.
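
A minimal PyTorch sketch of the described architecture, with hidden layers shared across groups and one output layer per speaker group; the layer sizes, activation, and the two-group split are illustrative assumptions, not the paper's configuration.

```python
import torch.nn as nn

class SharedHiddenAM(nn.Module):
    """Acoustic model whose hidden layers are shared between native and
    foreign speech, with a separate state-posterior output layer per group."""

    def __init__(self, n_in=440, n_hidden=1024, n_layers=5,
                 n_states_native=4000, n_states_foreign=4000):
        super().__init__()
        layers = []
        for i in range(n_layers):
            layers += [nn.Linear(n_in if i == 0 else n_hidden, n_hidden),
                       nn.Sigmoid()]
        self.shared = nn.Sequential(*layers)
        self.head = nn.ModuleDict({
            'native': nn.Linear(n_hidden, n_states_native),
            'foreign': nn.Linear(n_hidden, n_states_foreign),
        })

    def forward(self, x, group):
        # `group` selects which state-cluster output layer to apply.
        return self.head[group](self.shared(x))
```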

Hyperparameter experiments on end-to-end automatic speech recognition

  • Yang, Hyungwon;Nam, Hosung
    • 말소리와 음성과학 / Vol. 13, No. 1 / pp. 45-51 / 2021
  • End-to-end (E2E) automatic speech recognition (ASR) has achieved promising performance gains with the introduction of the self-attention network, the Transformer. However, because of the training time and the number of hyperparameters, finding the optimal hyperparameter set is computationally expensive. This paper investigates the impact of hyperparameters in the Transformer network to answer two questions: which hyperparameters play a critical role in task performance, and which in training speed. The Transformer network used for training consists of encoder and decoder networks combined with Connectionist Temporal Classification (CTC). We trained the model on Wall Street Journal (WSJ) SI-284 and tested it on dev93 and eval92. Seventeen hyperparameters were selected from the ESPnet training configuration, and varying ranges of values were used in the experiments. The results show that the "num blocks" and "linear units" hyperparameters in the encoder and decoder networks reduce the Word Error Rate (WER) significantly; the performance gain is more prominent when they are altered in the encoder network. Training duration also increases linearly as the values of "num blocks" and "linear units" grow. Based on the experimental results, we collected the optimal value of each hyperparameter and reduced the WER to 2.9/1.9 on dev93 and eval92, respectively.
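
As an illustration of such a sweep, the sketch below grids over the two hyperparameters the paper found most influential. The value ranges are made up, and train_and_eval() is a hypothetical placeholder for an actual ESPnet training and dev-set scoring run.

```python
import itertools

def train_and_eval(num_blocks, linear_units):
    # Placeholder: in practice this would launch an ESPnet training run
    # and score dev93. The dummy value below only lets the loop execute.
    return 10.0 + 100.0 / (num_blocks * linear_units)

# Hypothetical ranges for the two most influential hyperparameters.
grid = {'num_blocks': [6, 12, 18],       # encoder/decoder depth
        'linear_units': [1024, 2048]}    # feed-forward width

results = {}
for num_blocks, linear_units in itertools.product(*grid.values()):
    results[(num_blocks, linear_units)] = train_and_eval(num_blocks,
                                                         linear_units)

(best_blocks, best_units), best_wer = min(results.items(),
                                          key=lambda kv: kv[1])
print(f'best: num_blocks={best_blocks}, '
      f'linear_units={best_units}, WER={best_wer:.2f}')
```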

자동차 잡음환경에서의 음성인식에 적용된 두 종류의 일반화된 감마분포 기반의 음성추정 알고리즘 비교 (Comparison of Two Speech Estimation Algorithms Based on Generalized-Gamma Distribution Applied to Speech Recognition in Car Noisy Environment)

  • 김형국;이진호
    • 한국ITS학회 논문지 / Vol. 8, No. 4 / pp. 28-32 / 2009
  • This paper compares two speech estimation algorithms based on the generalized-Gamma distribution, applied to DFT-based single-microphone speech enhancement. In the enhancement scheme, a noise estimate derived from recursively averaged spectral values of the minimum noise components is combined with speech estimation using Gamma distributions with κ = 1 and κ = 2, respectively, to improve speech quality. The speech signals enhanced by each method were applied to speech recognition in a car environment and their performance was compared.
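
The generalized-Gamma MMSE amplitude estimators compared in the paper involve special functions and differ between κ = 1 and κ = 2; the sketch below substitutes a standard decision-directed Wiener gain to show the surrounding DFT-domain enhancement step that both variants share. Parameter names and the smoothing constant are illustrative.

```python
import numpy as np

def enhance_frame(noisy_mag, noise_psd, prev_clean_mag, alpha=0.98):
    """One frame of DFT-domain single-microphone enhancement.

    A simplified decision-directed Wiener gain stands in for the
    generalized-Gamma amplitude estimators compared in the paper; the
    noise-tracking / overlap-add framework around it is the same.
    """
    noise_psd = np.maximum(noise_psd, 1e-12)
    post_snr = noisy_mag ** 2 / noise_psd                 # a posteriori SNR
    prio_snr = (alpha * prev_clean_mag ** 2 / noise_psd   # decision-directed
                + (1 - alpha) * np.maximum(post_snr - 1.0, 0.0))
    gain = prio_snr / (1.0 + prio_snr)                    # Wiener gain
    return gain * noisy_mag
```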


Fast offline transformer-based end-to-end automatic speech recognition for real-world applications

  • Oh, Yoo Rhee;Park, Kiyoung;Park, Jeon Gue
    • ETRI Journal / Vol. 44, No. 3 / pp. 476-490 / 2022
  • With recent advances in technology, automatic speech recognition (ASR) has been widely used in real-world applications. The efficiency of accurately converting large amounts of speech into text with limited resources has become more vital than ever. In this study, we propose a method to rapidly recognize a large speech database via a transformer-based end-to-end model. Transformers have improved the state-of-the-art performance in many fields but are not easy to use for long sequences. In this study, various techniques to accelerate the recognition of real-world speech are proposed and tested, including decoding via multiple-utterance-batched beam search, detecting the end of speech based on connectionist temporal classification (CTC), restricting the CTC-prefix score, and splitting long speech into short segments. Experiments are conducted on the LibriSpeech dataset and real-world Korean ASR tasks to verify the proposed methods. In the experiments, the proposed system converts 8 h of speech recorded at real-world meetings into text in less than 3 min with a 10.73% character error rate, which is 27.1% relatively lower than that of conventional systems.
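
One of the listed techniques, splitting long speech into short segments, can be approximated with a simple energy heuristic; the paper itself detects end-of-speech with CTC. A sketch under that simplification (segment lengths, search window, and batch size are illustrative):

```python
import numpy as np

def split_long_speech(signal, sr, max_sec=20.0, search_sec=2.0):
    """Split a long recording into decodable segments, cutting at the
    quietest sample within a window before each max-length boundary.

    A simple energy-based stand-in for CTC-based end-of-speech detection.
    """
    max_len, search = int(max_sec * sr), int(search_sec * sr)
    segments, start = [], 0
    while len(signal) - start > max_len:
        lo, hi = start + max_len - search, start + max_len
        cut = lo + int(np.argmin(np.abs(signal[lo:hi])))  # quietest point
        segments.append(signal[start:cut])
        start = cut
    segments.append(signal[start:])
    return segments

def batches(segments, batch_size=8):
    """Group segments for multiple-utterance-batched beam search."""
    for i in range(0, len(segments), batch_size):
        yield segments[i:i + batch_size]
```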

Improved Bimodal Speech Recognition Study Based on Product Hidden Markov Model

  • Xi, Su Mei;Cho, Young Im
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 13, No. 3 / pp. 164-170 / 2013
  • Recent years have seen higher demand for automatic speech recognition (ASR) systems able to operate robustly in acoustically noisy environments. This paper proposes an improved product hidden Markov model (HMM) for bimodal speech recognition. A two-dimensional training model is built from independently trained audio and visual HMMs, reflecting the asynchronous characteristics of the audio and video streams. A weight coefficient is introduced to adjust the weights of the video and audio streams automatically according to the noise environment. Experimental results show that, compared with other bimodal speech recognition approaches, this approach obtains better speech recognition performance.
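
The stream-weighting idea amounts to raising each stream's likelihood to a noise-dependent exponent, which in the log domain is a convex combination. A minimal sketch, assuming a linear SNR-to-weight mapping (the paper adapts the weight automatically from the noise environment; the SNR endpoints are illustrative):

```python
import numpy as np

def fused_stream_loglik(loglik_audio, loglik_video, snr_db,
                        snr_lo=0.0, snr_hi=20.0):
    """Weighted product fusion of per-state log-likelihoods:
    log p = w * log p_audio + (1 - w) * log p_video.

    The audio stream is trusted more as the SNR improves; at snr_lo and
    below, decoding relies entirely on the visual stream.
    """
    w = np.clip((snr_db - snr_lo) / (snr_hi - snr_lo), 0.0, 1.0)
    return w * loglik_audio + (1.0 - w) * loglik_video
```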