• Title/Abstract/Keywords: Korean speech recognition

1,112 search results

잡음음성인식을 위한 음성개선 방식들의 성능 비교 (Performance Comparison of the Speech Enhancement Methods for Noisy Speech Recognition)

  • 정용주 / 말소리와 음성과학 / Vol. 1, No. 2 / pp. 9-14 / 2009
  • Speech enhancement methods can generally be classified into a few categories, and they have usually been compared with each other in terms of speech quality. For the successful use of speech enhancement methods in speech recognition systems, performance comparisons in terms of speech recognition accuracy are necessary. In this paper, we compared the speech recognition performance of some representative speech enhancement algorithms that are widely cited and used in the literature. We also compared the performance of speech enhancement methods with other noise-robust speech recognition methods, such as PMC, to verify the usefulness of speech enhancement approaches in noise-robust speech recognition systems.
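
One classic enhancement method typically included in such comparisons is spectral subtraction; the abstract does not name the specific algorithms compared, so the following is only an illustrative sketch of that family (assuming NumPy and power-spectrum inputs):

```python
import numpy as np

def spectral_subtraction(noisy_power, noise_power, floor=0.01):
    # Subtract the noise power estimate from the noisy power spectrum,
    # clamping negative results at a small fraction of the noisy power
    # (the "spectral floor") to avoid negative energies.
    return np.maximum(noisy_power - noise_power, floor * noisy_power)
```

Real systems estimate the noise power from speech-free frames and apply this frame-by-frame to the short-time spectrum.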


대학생들이 또렷한 음성과 대화체로 발화한 영어문단의 구글음성인식 (Google speech recognition of an English paragraph produced by college students in clear or casual speech styles)

  • 양병곤 / 말소리와 음성과학 / Vol. 9, No. 4 / pp. 43-50 / 2017
  • These days, the voice models of speech recognition software are sophisticated enough to process natural speech without any prior training. However, little research has been reported on the use of speech recognition tools in pronunciation education. This paper examined Google speech recognition of a short English paragraph produced by Korean college students in clear and casual speech styles in order to diagnose and resolve students' pronunciation problems. Thirty-three Korean college students participated in the recording of the English paragraph. The Google soundwriter was employed to collect data on the word recognition rates of the paragraph. Results showed that the total word recognition rate was 73% with a standard deviation of 11.5%. The word recognition rate of clear speech was around 77.3%, while that of casual speech amounted to 68.7%. The low recognition rate of casual speech was attributed both to individual pronunciation errors and to the software itself, as shown in its fricative recognition. Various distributions of unrecognized words were observed depending on the participant and proficiency group. From the results, the author concludes that speech recognition software is useful for diagnosing individual or group pronunciation problems. Further studies on progressive improvement of learners' erroneous pronunciations would be desirable.
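
A word recognition rate like the one reported above can be computed by aligning the recognizer output against the reference paragraph; the sketch below uses a longest-common-subsequence alignment (an assumption for illustration, since the paper's exact scoring procedure is not described):

```python
def lcs_length(a, b):
    # Longest-common-subsequence length via dynamic programming.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
    return dp[len(a)][len(b)]

def word_recognition_rate(reference, hypothesis):
    # Fraction of reference words matched, in order, by the hypothesis.
    ref, hyp = reference.split(), hypothesis.split()
    return lcs_length(ref, hyp) / len(ref)
```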

음성명령에 의한 모바일로봇의 실시간 무선원격 제어 실현 (Real-Time Implementation of Wireless Remote Control of Mobile Robot Based-on Speech Recognition Command)

  • 심병균;한성현 / 한국생산제조학회지 / Vol. 20, No. 2 / pp. 207-213 / 2011
  • In this paper, we present a study on the real-time implementation of a mobile robot to which an interactive voice recognition technique is applied. The speech commands are uttered as sentential connected words and issued through the wireless remote control system. We implement an automatic distant speech command recognition system for interactive voice-enabled services. We construct a baseline automatic speech command recognition system in which acoustic models are trained from speech utterances recorded with a microphone. In order to improve the performance of the baseline system, the acoustic models are adapted to adjust the spectral characteristics of speech according to different microphones and to the environmental mismatches between close-talking and distant speech. We illustrate the performance of the developed speech recognition system by experiments. As a result, the average recognition rate of the proposed system is shown to be above about 95%.

강인한 음성 인식 시스템을 사용한 감정 인식 (Emotion Recognition using Robust Speech Recognition System)

  • 김원구 / 한국지능시스템학회논문지 / Vol. 18, No. 5 / pp. 586-591 / 2008
  • This paper studies an emotion recognition system combined with a speech recognition system robust to emotional variation, in order to improve the performance of human emotion recognition from speech. To this end, we first studied the effect of emotional variation on speech recognition performance using a speech database containing various emotions, and implemented a speech recognition system that is less affected by emotional variation. Emotion recognition is then performed by comparing emotion models for the input sentence, according to the speech recognition result, to produce the final emotion decision for the input speech. In the experiments, the robust speech recognizer was an HMM-based speaker-independent word recognizer using RASTA mel-cepstrum and delta-cepstrum coefficients as speech parameters and CMS for signal-bias removal. Emotion recognition combined with this recognizer showed better performance than using the emotion recognizer alone.
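
The CMS (cepstral mean subtraction) step used for signal-bias removal in the recognizer can be sketched in a few lines (a minimal NumPy illustration, not the paper's implementation):

```python
import numpy as np

def cepstral_mean_subtraction(cepstra):
    # Remove the per-coefficient mean over time, reducing stationary
    # channel bias in the cepstral features.
    # cepstra: array of shape (num_frames, num_coeffs).
    return cepstra - cepstra.mean(axis=0, keepdims=True)
```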

순환 신경망 모델을 이용한 한국어 음소의 음성인식에 대한 연구 (A Study on the Speech Recognition of Korean Phonemes Using Recurrent Neural Network Models)

  • 김기석;황희영 / 대한전기학회논문지 / Vol. 40, No. 8 / pp. 782-791 / 1991
  • In pattern recognition fields such as speech recognition, several new techniques using Artificial Neural Network models have been proposed and implemented. In particular, the Multilayer Perceptron model has been shown to be effective in static speech pattern recognition. However, speech has dynamic, temporal characteristics, and the most important point in implementing continuous speech recognition systems with Artificial Neural Network models is the learning of dynamic characteristics and of the distributed cues and contextual effects that result from those temporal characteristics. The Recurrent Multilayer Perceptron model is known to be able to learn sequences of patterns. In this paper, we present the results of applying the Recurrent model, which can learn the temporal characteristics of speech, to phoneme recognition. The test data consist of 144 Vowel + Consonant + Vowel speech chains made up of 4 Korean monophthongs and 9 Korean plosive consonants. The input parameters of the Artificial Neural Network model are FFT coefficients, residual error, and zero-crossing rates. The baseline model showed recognition rates of 91% for vowels and 71% for plosive consonants for one male speaker. Various other experiments yielded better recognition rates than the existing Multilayer Perceptron model, showing the Recurrent model to be better suited to speech recognition. The possibility of using Recurrent models for speech recognition was further explored by changing the configuration of this baseline model.
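
The forward pass of a recurrent (Elman-type) multilayer perceptron, in which the hidden state is fed back so the model can capture temporal context, can be sketched as follows (weight shapes and names are assumptions for illustration, not the paper's configuration):

```python
import numpy as np

def elman_forward(x_seq, W_in, W_rec, W_out):
    # At each time step the hidden state depends on the current input
    # and on the previous hidden state (the recurrent feedback).
    h = np.zeros(W_rec.shape[0])
    outputs = []
    for x in x_seq:
        h = np.tanh(W_in @ x + W_rec @ h)
        outputs.append(W_out @ h)
    return outputs
```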

네트워크 환경에서 서버용 음성 인식을 위한 MFCC 기반 음성 부호화기 설계 (A MFCC-based CELP Speech Coder for Server-based Speech Recognition in Network Environments)

  • 이길호;윤재삼;오유리;김홍국 / 대한음성학회지:말소리 / No. 54 / pp. 27-43 / 2005
  • Existing standard speech coders can provide high-quality speech communication, but they degrade the performance of speech recognition systems that use the speech reconstructed by the coders. The main cause of the degradation is that the spectral envelope parameters in speech coding are optimized for speech quality rather than for speech recognition performance. For example, mel-frequency cepstral coefficients (MFCCs) are generally known to provide better speech recognition performance than linear prediction coefficients (LPCs), a typical parameter set in speech coding. In this paper, we propose a speech coder using MFCC instead of LPC to improve the performance of a server-based speech recognition system in network environments. The main challenge in using MFCC, however, is developing an efficient MFCC quantization at a low bit rate. First, we explore the interframe correlation of MFCCs, which leads to predictive quantization of MFCC. Second, a safety-net scheme is proposed to make the MFCC-based speech coder robust to channel errors. As a result, we propose an 8.7 kbps MFCC-based CELP coder. A PESQ test shows that the proposed speech coder has speech quality comparable to 8 kbps G.729, while the speech recognition performance using the proposed coder is better than that using G.729.
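
The interframe predictive quantization of MFCCs can be illustrated with a first-order predictor and a uniform scalar quantizer (a hypothetical sketch; the proposed coder's actual quantizer design is more elaborate):

```python
import numpy as np

def predictive_encode(frames, step=0.5):
    # Exploit interframe correlation: quantize only the residual between
    # each MFCC frame and the previous *reconstructed* frame, so the
    # encoder and decoder stay in sync.
    prev = np.zeros_like(frames[0])
    codes, recon = [], []
    for f in frames:
        residual = f - prev
        q = np.round(residual / step).astype(int)  # uniform quantizer indices
        prev = prev + q * step                     # decoder-side reconstruction
        codes.append(q)
        recon.append(prev.copy())
    return codes, recon
```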


The Study on Korean Phoneme for Korean Speech Recognition

  • Hwang, Young-Soo / 대한전자공학회:학술대회논문집 / 대한전자공학회 2000년도 ITC-CSCC -2 / pp. 629-632 / 2000
  • In this paper, we studied phoneme classification for Korean speech recognition. When building a large-vocabulary speech recognition system, it is better to use the phoneme than the syllable or word as the recognition unit. In order to study how recognition performance varies with the number of phonemes used as recognition units, we used the OGI speech toolkit (U.S.A.) as the recognition system. The results showed that treating diphthongs as unified units performed better than separating them, and that better results were obtained when using biphones rather than monophones as the recognition unit.


잔향 환경 음성인식을 위한 다중 해상도 DenseNet 기반 음향 모델 (Multi-resolution DenseNet based acoustic models for reverberant speech recognition)

  • 박순찬;정용원;김형순 / 말소리와 음성과학 / Vol. 10, No. 1 / pp. 33-38 / 2018
  • Although deep neural network-based acoustic models have greatly improved the performance of automatic speech recognition (ASR), reverberation still degrades the performance of distant speech recognition in indoor environments. In this paper, we adopt the DenseNet, which has shown great performance results in image classification tasks, to improve the performance of reverberant speech recognition. The DenseNet enables the deep convolutional neural network (CNN) to be effectively trained by concatenating feature maps in each convolutional layer. In addition, we extend the concept of multi-resolution CNN to multi-resolution DenseNet for robust speech recognition in reverberant environments. We evaluate the performance of reverberant speech recognition on the single-channel ASR task in reverberant voice enhancement and recognition benchmark (REVERB) challenge 2014. According to the experimental results, the DenseNet-based acoustic models show better performance than do the conventional CNN-based ones, and the multi-resolution DenseNet provides additional performance improvement.
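
The DenseNet idea of concatenating each layer's feature maps with all earlier ones can be sketched framework-agnostically (here `layers` is any list of callables; that interface is an assumption for brevity):

```python
import numpy as np

def dense_block(x, layers):
    # Each layer sees the concatenation of the block input and all
    # previous layer outputs along the channel (last) axis, which is
    # what makes deep CNNs in a DenseNet easy to train.
    features = [x]
    for layer in layers:
        out = layer(np.concatenate(features, axis=-1))
        features.append(out)
    return np.concatenate(features, axis=-1)
```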

Error Correction for Korean Speech Recognition using a LSTM-based Sequence-to-Sequence Model

  • Jin, Hye-won;Lee, A-Hyeon;Chae, Ye-Jin;Park, Su-Hyun;Kang, Yu-Jin;Lee, Soowon / 한국컴퓨터정보학회논문지 / Vol. 26, No. 10 / pp. 1-7 / 2021
  • Most research on speech recognition error correction has been conducted for English, and research on Korean speech recognition is still limited. Compared with English, however, Korean speech recognition produces relatively many errors due to linguistic characteristics of Korean such as fortis (tense consonants) and liaison, so research on Korean speech recognition is needed. In addition, because existing Korean studies mainly use edit-distance algorithms and syllable restoration rules, it is difficult for them to correct the error types caused by fortis and liaison. To correct Korean speech recognition errors caused by such pronunciations, this study proposes a context-based post-processing model that combines an LSTM-based Sequence-to-Sequence model with Bahdanau attention. Experimental results show that with the proposed model the recognition rate improved from 64% to 77% for fortis, from 74% to 90% for liaison, and from 69% to 84% on average. Based on these results, the proposed model is considered applicable to real applications built on speech recognition.
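
The edit-distance algorithm that the earlier Korean post-processing studies rely on (and against which the proposed model is contrasted) is the standard Levenshtein distance:

```python
def edit_distance(a, b):
    # Levenshtein distance between two symbol sequences, computed with
    # a single rolling row of the dynamic-programming table.
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            # prev holds dp[i-1][j-1]; dp[j] still holds dp[i-1][j].
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j-1] + 1, prev + (x != y))
    return dp[-1]
```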

자동차 잡음 및 오디오 출력신호가 존재하는 자동차 실내 환경에서의 강인한 음성인식 (Robust Speech Recognition in the Car Interior Environment having Car Noise and Audio Output)

  • 박철호;배재철;배건성 / 대한음성학회지:말소리 / No. 62 / pp. 85-96 / 2007
  • In this paper, we carried out recognition experiments on noisy speech with various levels of car noise and audio-system output, using the proposed speech interface. The speech interface consists of three parts: pre-processing, an acoustic echo canceller, and post-processing. First, a high-pass filter is employed in the pre-processing part to remove engine noise. Then, an echo canceller, implemented as an FIR-type filter with an NLMS adaptive algorithm, is used to remove the music or speech coming from the audio system in the car. In the last part, the MMSE-STSA based speech enhancement method is applied to the output of the echo canceller to further remove residual noise. For the recognition experiments, we generated test signals by adding music to the car-noise speech from the Aurora 2 database. An HTK-based continuous HMM system was constructed as the recognition system. Experimental results show that the proposed speech interface is very promising for robust speech recognition in a noisy car environment.
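
The NLMS-adapted FIR echo canceller described above can be sketched as follows (tap count and step size are assumed values for illustration, not the paper's settings):

```python
import numpy as np

def nlms_cancel(reference, mic, n_taps=4, mu=0.5, eps=1e-8):
    # Adapt an FIR filter w so that, applied to the reference signal
    # (the audio-system output), it predicts the echo in the mic signal;
    # the residual e is the echo-cancelled output.
    w = np.zeros(n_taps)
    out = np.zeros(len(mic))
    for n in range(len(mic)):
        x = reference[max(0, n - n_taps + 1):n + 1][::-1]  # newest sample first
        x = np.pad(x, (0, n_taps - len(x)))                # zero-pad early samples
        e = mic[n] - w @ x                                 # error = mic - echo estimate
        w += mu * e * x / (x @ x + eps)                    # normalized LMS update
        out[n] = e
    return out
```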
