• Title/Abstract/Keyword: Automatic Speech Recognition

213 search results

한국인의 외국어 발화오류검출 음성인식기에서 청취판단과 상관관계가 높은 기계 스코어링 기법 (Machine Scoring Methods Highly-correlated with Human Ratings in Speech Recognizer Detecting Mispronunciation of Foreign Language)

  • 배민영;권철홍
    • 음성과학, Vol. 11, No. 2, pp. 217-226, 2004
  • An automatic pronunciation correction system provides users with correction guidelines for each pronunciation error. For this purpose, we develop a speech recognition system that automatically classifies pronunciation errors when Koreans speak a foreign language. In this paper, we propose machine scoring methods for automatic assessment of pronunciation quality by the speech recognizer. Scores obtained from an expert human listener are used as the reference to evaluate the different machine scores and to provide targets when training some of the algorithms. We use a log-likelihood score and a normalized log-likelihood score as machine scoring methods. Experimental results show that the normalized log-likelihood score has a higher correlation with human scores than the log-likelihood score.
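
A minimal sketch of the two score types the abstract compares, assuming per-frame acoustic log-likelihoods from an HMM recognizer and forced-aligned phone boundaries; the exact normalization used in the paper may differ.

```python
import numpy as np

def machine_scores(frame_loglikes, segment_bounds):
    """Two generic pronunciation scores of the kind the abstract compares.

    frame_loglikes : per-frame acoustic log-likelihoods of the forced-aligned
                     target phones (assumed to come from an HMM recognizer).
    segment_bounds : (start, end) frame indices of each phone segment.
    Returns the raw log-likelihood score and a duration-normalized score
    per phone; the paper's exact normalization may differ from this sketch.
    """
    raw, normalized = [], []
    for start, end in segment_bounds:
        seg = frame_loglikes[start:end]
        total = float(np.sum(seg))
        raw.append(total)
        normalized.append(total / max(len(seg), 1))   # per-frame average
    return raw, normalized

# Example with synthetic frame log-likelihoods for two phones
ll = np.array([-3.2, -2.9, -3.0, -8.5, -7.9, -8.1, -8.0])
print(machine_scores(ll, [(0, 3), (3, 7)]))
```

Correlation of either score with the human listener scores, the evaluation criterion the abstract describes, can then be measured with np.corrcoef.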


Possibility of the Rough Set Approach in Phonetic Distinctions

  • Lee, Jae-Ik;Kim, Jeong-Kuk;Jo, Heung-Kuk;Suh, Kap-Sun
    • 한국음향학회 학술대회논문집 (Proceedings of the Acoustical Society of Korea, 1996 Youngnam Chapter Symposium), pp. 66-69, 1996
  • The goal of automatic speech recognition is to study techniques and systems that enable agents such as computers and robots to accept speech input. However, this paper does not provide a concrete speech recognition technology; rather, it proposes a possible mathematical tool to be employed in that area. We introduce rough set theory and suggest the possibility of the rough set approach in phonetic distinctions.
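
Since the abstract only names the mathematical tool, the following is a minimal, self-contained illustration of the rough-set lower and upper approximations it refers to; the phonetic features and objects are purely hypothetical.

```python
from collections import defaultdict

def approximations(universe, attributes, target):
    """Rough-set lower and upper approximations of `target`.

    universe   : dict mapping each object to a tuple of attribute values
                 (here: made-up binary phonetic features of segments).
    attributes : indices of the attribute values used for indiscernibility.
    target     : set of objects to characterize (e.g. one phone class).
    """
    # Partition the universe into equivalence classes of indiscernible objects.
    blocks = defaultdict(set)
    for obj, values in universe.items():
        blocks[tuple(values[i] for i in attributes)].add(obj)

    lower, upper = set(), set()
    for block in blocks.values():
        if block <= target:      # entirely inside the target: certain members
            lower |= block
        if block & target:       # overlaps the target: possible members
            upper |= block
    return lower, upper

# Toy example with hypothetical binary features (voiced, high).
segments = {'a1': (1, 0), 'a2': (1, 0), 'i1': (1, 1), 'k1': (0, 0)}
# Using only the 'voiced' feature, /a/ segments are indiscernible from 'i1',
# so the lower approximation is empty and the upper is {'a1', 'a2', 'i1'}.
print(approximations(segments, attributes=[0], target={'a1', 'a2'}))
```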


고속 발화음에 대한 음성 인식 향상 (Improvements on Speech Recognition for Fast Speech)

  • 이기승
    • 한국음향학회지, Vol. 25, No. 2, pp. 88-95, 2006
  • In this paper, as a way of improving speech recognition performance for conversational speech, we propose and evaluate a recognition method that is robust to fast speech. The proposed technique requires no additional recognition pass to quantify the rate of the input speech: vowel segments are detected from the energy distribution within a specific frequency band, and the speaking rate is measured as the number of vowels per unit time. To improve recognition of fast speech, the conventional method expands the feature vectors along the time axis using the ratio between standard and measured phoneme durations. In the proposed method, utterances are instead classified by speaking rate, and a different time-axis expansion ratio is assigned to each class. The thresholds needed for this classification and the time-axis expansion ratios are obtained by a maximum likelihood method. In recognition experiments on 10-digit mobile phone numbers, the proposed technique reduced the overall error rate by 17.8%.
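
A minimal sketch of the two steps described above, under assumed feature dimensions: a vowel-count speaking-rate estimate from band energy, followed by a class-dependent time-axis expansion of the feature sequence. The band, thresholds, and expansion ratios below are illustrative placeholders, not the maximum-likelihood values estimated in the paper.

```python
import numpy as np

def speaking_rate(frames, band=(10, 30), energy_thr=0.5, frame_rate=100.0):
    """Rough speaking-rate estimate: vowel-like segments per second.

    `frames` is a (T, F) array of spectral magnitudes; a frame is marked
    vowel-like when its energy inside `band` exceeds `energy_thr` times
    the utterance median (band and threshold are assumed values).
    """
    band_energy = frames[:, band[0]:band[1]].sum(axis=1)
    vowel_mask = (band_energy > energy_thr * np.median(band_energy)).astype(int)
    # Count rising edges of the mask, i.e. the number of vowel segments.
    n_vowels = np.count_nonzero(np.diff(np.concatenate(([0], vowel_mask))) == 1)
    return n_vowels / (len(frames) / frame_rate)        # vowels per second

def expand_time_axis(features, ratio):
    """Linearly resample a (T, D) feature sequence to length round(T * ratio)."""
    T = len(features)
    new_T = max(1, int(round(T * ratio)))
    src = np.linspace(0, T - 1, new_T)
    lo = np.floor(src).astype(int)
    hi = np.minimum(lo + 1, T - 1)
    w = (src - lo)[:, None]
    return (1 - w) * features[lo] + w * features[hi]

# Rate-class boundaries and per-class expansion ratios (hypothetical values;
# the paper estimates thresholds and ratios by maximum likelihood).
RATE_THRESHOLDS = [4.0, 5.5]          # vowels/sec: normal / fast / very fast
EXPANSION_RATIOS = [1.0, 1.15, 1.3]

def normalize_rate(frames, features):
    cls = int(np.searchsorted(RATE_THRESHOLDS, speaking_rate(frames)))
    return expand_time_axis(features, EXPANSION_RATIOS[cls])
```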

On-Line Blind Channel Normalization for Noise-Robust Speech Recognition

  • Jung, Ho-Young
    • IEIE Transactions on Smart Processing and Computing, Vol. 1, No. 3, pp. 143-151, 2012
  • A new data-driven method for designing a blind modulation frequency filter that suppresses slow-varying noise components is proposed. The proposed method is based on temporal local decorrelation of the feature vector sequence and is performed on an utterance-by-utterance basis. Whereas conventional modulation frequency filtering applies the same filter form regardless of the task and environmental conditions, the proposed method provides an adaptive modulation frequency filter that outperforms conventional methods for each utterance. In addition, the method ultimately performs channel normalization in the feature domain, with application to log-spectral parameters. Performance was evaluated in speaker-independent isolated-word recognition experiments under additive noise environments. The proposed method achieved marked improvement for speech recognition in environments with significant noise and was also effective across a range of feature representations.
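
For context, a generic sketch of suppressing slow-varying components of log-spectral feature trajectories on a per-utterance basis. This is a simple moving-average high-pass modulation filter, not the paper's data-driven decorrelation filter, and the window length is an assumed value.

```python
import numpy as np

def suppress_slow_modulation(logspec, win=51):
    """Per-utterance channel-normalization sketch: remove slow-varying
    components of each log-spectral trajectory by subtracting a moving
    average along time (a generic high-pass modulation filter).

    logspec : (T, D) array of log-spectral (or log-filterbank) features.
    win     : odd smoothing-window length in frames (assumed value).
    """
    T, D = logspec.shape
    pad = win // 2
    padded = np.pad(logspec, ((pad, pad), (0, 0)), mode='edge')
    kernel = np.ones(win) / win
    slow = np.stack([np.convolve(padded[:, d], kernel, mode='valid')
                     for d in range(D)], axis=1)
    return logspec - slow          # keep only the faster modulation components
```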


한국어 음성인식을 위한 음성학 기반의 유사음소단위 집합 설계 (A Phonetics Based Design of PLU Sets for Korean Speech Recognition)

  • 홍혜진;김선희;정민화
    • 대한음성학회지: 말소리, No. 65, pp. 105-124, 2008
  • This paper presents the effects of different phone-like-unit (PLU) sets in order to propose an optimal PLU set for improving the performance of Korean automatic speech recognition (ASR) systems. An examination of 9 currently used PLU sets indicates that most of them include a selection of allophones without any sufficient phonetic basis. In this paper, a total of 34 PLU sets are designed based on Korean phonetic characteristics, and the effects of each PLU set are evaluated through experiments. The results show that the accuracy rate of each phone is influenced by the phonetic constraint(s) that determine the PLU sets, and that an optimal PLU set can be anticipated through phonetic analysis of the given speech data.


Spectral Subtraction Using Spectral Harmonics for Robust Speech Recognition in Car Environments

  • Beh, Jounghoon;Ko, Hanseok
    • The Journal of the Acoustical Society of Korea, Vol. 22, No. 2E, pp. 62-68, 2003
  • This paper addresses a novel noise-compensation scheme to solve the mismatch problem between training and testing conditions for automatic speech recognition (ASR) systems, specifically in car environments. Conventional spectral subtraction schemes rely on the signal-to-noise ratio (SNR), such that attenuation is imposed on the part of the spectrum that appears to have low SNR and accentuation is applied to the part with high SNR. However, these schemes are based on the postulation that the power spectrum of noise is generally lower in magnitude than that of speech. While this postulation is adequate for high-SNR environments, it is grossly inadequate for low-SNR scenarios such as the car environment. This paper proposes an efficient spectral subtraction scheme aimed specifically at low-SNR noisy environments by distinctively extracting the harmonics of the speech spectrum. Representative experiments confirm the superior performance of the proposed method over conventional methods. The experiments are conducted using car-noise-corrupted utterances from the Aurora2 corpus.
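
A minimal sketch of magnitude spectral subtraction with an optional harmonic-protection weighting, to illustrate the general idea of exploiting spectral harmonics; the over-subtraction factor, spectral floor, and weighting rule are assumptions, not the paper's method.

```python
import numpy as np

def spectral_subtract(noisy_mag, noise_mag, f0_bin=None,
                      alpha=2.0, beta=0.01, protect=0.5):
    """Generic magnitude spectral subtraction with harmonic protection.

    noisy_mag : (F,) magnitude spectrum of one noisy frame.
    noise_mag : (F,) noise magnitude estimate (e.g. from non-speech frames).
    f0_bin    : fundamental-frequency bin; if given, less is subtracted at
                its harmonics so they are better preserved (an illustrative
                weighting, not the paper's exact rule).
    """
    sub = alpha * noise_mag
    if f0_bin:
        weights = np.ones_like(noisy_mag)
        harmonics = np.arange(f0_bin, len(noisy_mag), f0_bin)
        weights[harmonics] = protect        # subtract less at harmonic bins
        sub = sub * weights
    clean = noisy_mag - sub
    floor = beta * noisy_mag                # spectral floor limits musical noise
    return np.maximum(clean, floor)
```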

Comparison of Phone Boundary Alignment between Handlabels and Autolabels

  • Jang, Tae-Yeoub;Chung, Hyun-Song
    • 음성과학, Vol. 10, No. 1, pp. 27-39, 2003
  • This study attempts to verify the reliability of automatically generated segment labels compared to those obtained by conventional labelling by hand. First, an autolabeller is constructed using a standard HMM speech recognition technique. For evaluation, we compare the automatically generated labels with manually annotated labels for the same speech data. The comparison is performed by calculating the temporal difference between an autolabel boundary and its corresponding hand-label boundary. When the mismatch between the two labels falls within 10 msec, we consider the autolabel correct. The results show that overall 78% of autolabels are correctly obtained. It is found that the boundaries of obstruents are better aligned than those of sonorants and vowels. In the case of stop classes, strong stops (in terms of manner of articulation) and velar stops (in terms of place of articulation) show better boundary alignment. The results suggest that more phone-specific consideration is necessary to improve autosegmentation performance.
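
The evaluation criterion described above reduces to a simple tolerance check; a minimal sketch, assuming the hand and automatic boundary lists are already aligned one-to-one by forced alignment.

```python
def boundary_agreement(hand_boundaries, auto_boundaries, tol=0.010):
    """Fraction of automatic boundaries lying within `tol` seconds of the
    corresponding hand-labelled boundary (both lists in seconds, aligned
    one-to-one over the same phone sequence).
    """
    assert len(hand_boundaries) == len(auto_boundaries)
    hits = sum(1 for h, a in zip(hand_boundaries, auto_boundaries)
               if abs(h - a) <= tol)
    return hits / len(hand_boundaries)

# Example: 4 of 5 boundaries agree within 10 ms -> 0.8
print(boundary_agreement([0.12, 0.31, 0.55, 0.78, 1.02],
                         [0.125, 0.305, 0.57, 0.781, 1.021]))
```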


Speech Query Recognition for Tamil Language Using Wavelet and Wavelet Packets

  • Iswarya, P.;Radha, V.
    • Journal of Information Processing Systems, Vol. 13, No. 5, pp. 1135-1148, 2017
  • Speech recognition is one of the fascinating fields in the area of computer science. The accuracy of a speech recognition system may be reduced by noise present in the speech signal. Noise removal is therefore an essential step in an Automatic Speech Recognition (ASR) system, and this paper proposes a new technique called combined thresholding for noise removal. Feature extraction is the process of converting the acoustic signal into a compact set of informative parameters. This paper also concentrates on improving Mel Frequency Cepstral Coefficient (MFCC) features by introducing the Discrete Wavelet Packet Transform (DWPT) in place of the Discrete Fourier Transform (DFT) block to provide more efficient signal analysis. Because the resulting feature vectors vary in size, a Self-Organizing Map (SOM) is used to choose the correct feature-vector length. As a single classifier does not provide sufficient accuracy, this research proposes an Ensemble Support Vector Machine (ESVM) classifier, termed ESVM_SOM, to which the fixed-length feature vector from the SOM is given as input. Experimental results show that the proposed methods provide better results than the existing methods.
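
To make the DWPT-for-DFT substitution concrete, here is a minimal sketch of cepstral-like feature extraction from wavelet-packet sub-band energies; the wavelet, decomposition depth, and coefficient count are assumed values, not the paper's settings.

```python
import numpy as np
import pywt
from scipy.fftpack import dct

def dwpt_cepstral_features(frame, wavelet='db4', level=4, n_ceps=13):
    """Cepstral-like features from a Discrete Wavelet Packet Transform:
    decompose one speech frame into 2**level wavelet-packet sub-bands,
    take log sub-band energies, and decorrelate them with a DCT (the role
    the DFT-plus-mel-filterbank plays in standard MFCC extraction).
    """
    wp = pywt.WaveletPacket(data=frame, wavelet=wavelet,
                            mode='symmetric', maxlevel=level)
    nodes = wp.get_level(level, order='freq')      # sub-bands, low to high
    energies = np.array([np.sum(node.data ** 2) for node in nodes])
    log_energies = np.log(energies + 1e-10)        # avoid log(0)
    return dct(log_energies, type=2, norm='ortho')[:n_ceps]

# Example on a synthetic 25 ms frame at 16 kHz
frame = np.random.randn(400)
print(dwpt_cepstral_features(frame).shape)         # (13,)
```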

벌크 지표의 신경망 학습에 기반한 한국어 모음 'ㅡ'의 음성 인식 (Speech Recognition of the Korean Vowel 'ㅡ' based on Neural Network Learning of Bulk Indicators)

  • 이재원
    • 정보과학회 컴퓨팅의 실제 논문지, Vol. 23, No. 11, pp. 617-624, 2017
  • Speech recognition is one of the most widely used technologies in the HCI field. Many applications to which speech recognition can be applied, such as home automation, automatic interpretation, and car navigation, are currently being developed. Demand for speech recognition systems that can operate in mobile environments is also growing rapidly. As part of a Korean speech recognition system, this paper presents a method for rapidly recognizing the Korean vowel 'ㅡ'. Because the proposed method uses bulk indicators, which are computed in the time domain rather than the frequency domain, it reduces the computational cost of recognition. A neural network is trained on bulk indicators representing typical sequence patterns of the vowel 'ㅡ', and the trained network is used for the final recognition. Experimental results show that the proposed method recognizes the vowel 'ㅡ' with 88.7% accuracy at a recognition speed of 0.74 msec per word (eojeol).
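
A rough sketch of the overall pipeline: a small neural network trained on time-domain descriptors of short segments. The descriptors below (per-frame mean amplitude and zero-crossing rate) and the random training data merely stand in for the paper's bulk indicators and labelled corpus, so the example demonstrates only the data flow, not the reported accuracy.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def time_domain_indicators(signal, frame_len=160, n_frames=8):
    """Illustrative time-domain descriptors for a short segment: per-frame
    mean absolute amplitude and zero-crossing rate.  These stand in for the
    paper's 'bulk indicators', whose exact definition is not reproduced here.
    """
    feats = []
    for i in range(n_frames):
        frame = signal[i * frame_len:(i + 1) * frame_len]
        if len(frame) == 0:
            frame = np.zeros(frame_len)
        feats.append(np.mean(np.abs(frame)))                       # amplitude
        feats.append(np.mean(np.abs(np.diff(np.sign(frame)))) / 2)  # ZCR
    return np.array(feats)

# Hypothetical training data: random segments with random 'ㅡ' / not-'ㅡ' labels,
# used only to show the training and prediction steps.
rng = np.random.default_rng(0)
segments = rng.standard_normal((200, 160 * 8))
labels = rng.integers(0, 2, size=200)

X = np.array([time_domain_indicators(s) for s in segments])
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(X, labels)
print(clf.predict(X[:5]))
```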

Three-Stage Framework for Unsupervised Acoustic Modeling Using Untranscribed Spoken Content

  • Zgank, Andrej
    • ETRI Journal, Vol. 32, No. 5, pp. 810-818, 2010
  • This paper presents a new framework for integrating untranscribed spoken content into the acoustic training of an automatic speech recognition system. Untranscribed spoken content plays a very important role for under-resourced languages because producing manually transcribed speech databases is still a very expensive and time-consuming task. We propose two new methods as part of the training framework. The first method focuses on combining initial acoustic models using a data-driven metric. The second method proposes an improved acoustic training procedure based on unsupervised transcriptions, in which word endings are modified by broad phonetic classes. The training framework was applied to baseline acoustic models using untranscribed spoken content from parliamentary debates. We include three types of acoustic models in the evaluation: baseline, reference content, and framework content models. The best overall result, an 18.02% word error rate, was achieved with the third type. This result represents a statistically significant improvement over the baseline and reference acoustic models.