• Title/Summary/Keyword: Automatic Speech Recognition


Machine Scoring Methods Highly-correlated with Human Ratings in Speech Recognizer Detecting Mispronunciation of Foreign Language (한국인의 외국어 발화오류검출 음성인식기에서 청취판단과 상관관계가 높은 기계 스코어링 기법)

  • Bae, Min-Young;Kwon, Chul-Hong
    • Speech Sciences / v.11 no.2 / pp.217-226 / 2004
  • An automatic pronunciation correction system provides users with correction guidelines for each pronunciation error. For this purpose, we develop a speech recognition system which automatically classifies pronunciation errors when Koreans speak a foreign language. In this paper, we propose a machine scoring method for automatic assessment of pronunciation quality by the speech recognizer. Scores obtained from an expert human listener are used as the reference to evaluate the different machine scores and to provide targets when training some of the algorithms. We use a log-likelihood score and a normalized log-likelihood score as machine scoring methods. Experimental results show that the normalized log-likelihood score correlates more highly with human scores than the log-likelihood score.
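The two machine scores compared above can be sketched as follows. This is an illustrative stand-in, not the paper's implementation: the per-frame log-likelihoods (hypothetical floats here) would come from a recognizer's forced alignment against the target phone sequence.

```python
def log_likelihood_score(frame_loglikes):
    """Raw log-likelihood: sum over all frames of the scored segment.
    Biased toward short segments, since every frame adds a negative term."""
    return sum(frame_loglikes)

def normalized_log_likelihood_score(frame_loglikes):
    """Log-likelihood normalized by segment duration (frame count),
    removing the length bias of the raw score."""
    if not frame_loglikes:
        raise ValueError("empty segment")
    return sum(frame_loglikes) / len(frame_loglikes)
```

With duration normalization, a long and a short rendition of the same phone become directly comparable, which is one plausible reason the normalized score tracks human ratings more closely.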


Possibility of the Rough Set Approach in Phonetic Distinctions

  • Lee, Jae-Ik;Kim, Jeong-Kuk;Jo, Heung-Kuk;Suh, Kap-Sun
    • Proceedings of the Acoustical Society of Korea Conference / 1996.10a / pp.66-69 / 1996
  • The goal of automatic speech recognition is to study techniques and systems that enable agents such as computers and robots to accept speech input. However, this paper does not provide a concrete technology in speech recognition, but rather proposes possible mathematical tools to be employed in that area. We introduce rough set theory and suggest the possibility of the rough set approach in phonetic distinctions.
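The core rough-set construction the abstract alludes to is the pair of lower and upper approximations of a target set under an equivalence relation. A minimal sketch of those standard definitions (not tied to any phonetic data from the paper):

```python
def rough_approximations(equiv_classes, target):
    """Rough-set approximations of a target set of objects.
    Lower approximation: union of equivalence classes wholly inside the
    target (objects certainly in it). Upper approximation: union of
    classes that intersect the target at all (objects possibly in it)."""
    lower, upper = set(), set()
    for cls in equiv_classes:
        if cls <= target:       # class entirely contained in target
            lower |= cls
        if cls & target:        # class overlaps target
            upper |= cls
    return lower, upper
```

The gap between the two approximations (the boundary region) is what would model phonetic categories whose members cannot be crisply separated by the available features.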


Improvements on Speech Recognition for Fast Speech (고속 발화음에 대한 음성 인식 향상)

  • Lee, Ki-Seung
    • The Journal of the Acoustical Society of Korea / v.25 no.2 / pp.88-95 / 2006
  • In this paper, a method for improving the performance of an automatic speech recognition (ASR) system for conversational speech is proposed, which mainly focuses on increasing robustness against rapidly spoken utterances. The proposed method does not require an additional speech recognition pass to represent the speaking rate quantitatively. The energy distribution over spectral bands is employed to detect the vowel regions, and the number of vowels per second is then computed as the speaking rate. To improve performance for fast speech, previous methods expand the sequence of feature vectors by a scaling factor computed as the ratio between the standard phoneme duration and the measured one. In the method proposed herein, however, utterances are classified by their speaking rates, and the scaling factor is determined individually for each class, using a maximum likelihood criterion. Results from ASR experiments on 10-digit mobile phone numbers confirm that the overall error rate was reduced by 17.8% when the proposed method is employed.
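The rate estimate and the class-wise scaling choice can be sketched as below. This is a hedged illustration: the vowel count would come from the paper's band-energy vowel detector, and the rate classes and factors shown are hypothetical, not values from the paper (which fits them by maximum likelihood).

```python
def speaking_rate(num_vowels, duration_sec):
    """Speaking rate as vowels per second over the utterance."""
    return num_vowels / duration_sec

def scaling_factor(rate, class_factors):
    """Pick the feature-sequence time-scaling factor for the utterance's
    rate class. class_factors maps (low, high) rate ranges to factors;
    out-of-range rates fall back to no scaling (factor 1.0)."""
    for (low, high), factor in class_factors.items():
        if low <= rate < high:
            return factor
    return 1.0
```

Classifying utterances first, then scaling per class, avoids computing a fresh per-utterance duration ratio, which is the extra recognition work the proposed method removes.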

On-Line Blind Channel Normalization for Noise-Robust Speech Recognition

  • Jung, Ho-Young
    • IEIE Transactions on Smart Processing and Computing / v.1 no.3 / pp.143-151 / 2012
  • A new data-driven method for the design of a blind modulation frequency filter that suppresses slow-varying noise components is proposed. The proposed method is based on the temporal local decorrelation of the feature vector sequence, and is performed on an utterance-by-utterance basis. Although conventional modulation frequency filtering approaches take the same form regardless of task and environment conditions, the proposed method can provide an adaptive modulation frequency filter that outperforms conventional methods for each utterance. In addition, the method ultimately performs channel normalization in a feature domain with applications to log-spectral parameters. The performance was evaluated by speaker-independent isolated-word recognition experiments under additive noise environments. The proposed method achieved outstanding improvement for speech recognition in environments with significant noise and was also effective across a range of feature representations.
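For context, the simplest fixed channel normalization in the log-spectral/cepstral domain is per-utterance mean subtraction, which the adaptive filter above generalizes. This sketch shows only that fixed baseline, not the paper's data-driven decorrelation filter:

```python
def utterance_mean_normalize(frames):
    """Subtract the per-dimension mean over the whole utterance from each
    feature frame. A constant (convolutive) channel adds the same offset
    to every log-spectral frame, so removing the mean removes the channel.
    frames: list of equal-length feature vectors (lists of floats)."""
    n = len(frames)
    dims = len(frames[0])
    means = [sum(f[d] for f in frames) / n for d in range(dims)]
    return [[f[d] - means[d] for d in range(dims)] for f in frames]
```

Viewed as a modulation-frequency filter, mean subtraction removes only the DC component; the paper's adaptive filter instead shapes the whole low-frequency band per utterance.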


A Phonetics Based Design of PLU Sets for Korean Speech Recognition (한국어 음성인식을 위한 음성학 기반의 유사음소단위 집합 설계)

  • Hong, Hye-Jin;Kim, Sun-Hee;Chung, Min-Hwa
    • MALSORI / no.65 / pp.105-124 / 2008
  • This paper presents the effects of different phone-like-unit (PLU) sets in order to propose an optimal PLU set for improving the performance of Korean automatic speech recognition (ASR) systems. The examination of 9 currently used PLU sets indicates that most of them include a selection of allophones without any sufficient phonetic basis. In this paper, a total of 34 PLU sets are designed based on Korean phonetic characteristics, and the effects of each PLU set are evaluated through experiments. The results show that the accuracy rate of each phone is influenced by the different phonetic constraint(s) which determine(s) the PLU sets, and that an optimal PLU set can be anticipated through the phonetic analysis of the given speech data.


Spectral Subtraction Using Spectral Harmonics for Robust Speech Recognition in Car Environments

  • Beh, Jounghoon;Ko, Hanseok
    • The Journal of the Acoustical Society of Korea / v.22 no.2E / pp.62-68 / 2003
  • This paper addresses a novel noise-compensation scheme to solve the mismatch problem between training and testing conditions for automatic speech recognition (ASR) systems, specifically in car environments. Conventional spectral subtraction schemes rely on the signal-to-noise ratio (SNR), such that attenuation is imposed on the part of the spectrum that appears to have low SNR, and accentuation is applied to the part with high SNR. However, these schemes are based on the postulation that the power spectrum of noise is in general lower in magnitude than that of speech. While such a postulation is adequate for high-SNR environments, it is grossly inadequate for low-SNR scenarios such as that of a car environment. This paper proposes an efficient spectral subtraction scheme aimed specifically at low-SNR noisy environments, based on distinctively extracting harmonics in the speech spectrum. Representative experiments confirm the superior performance of the proposed method over conventional methods. The experiments are conducted using car noise-corrupted utterances of the Aurora2 corpus.
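The conventional baseline that this scheme improves on is plain power spectral subtraction with a spectral floor. This sketch shows only that baseline; the paper's harmonic-extraction step is omitted, and the over-subtraction and floor parameters are illustrative choices, not values from the paper.

```python
def spectral_subtraction(speech_power, noise_power, over_sub=1.0, floor=0.01):
    """Per-bin power spectral subtraction: subtract the (estimated) noise
    power from the noisy-speech power, clamping each bin to a small
    fraction of its original power so it never goes negative."""
    return [max(s - over_sub * n, floor * s)
            for s, n in zip(speech_power, noise_power)]
```

The clamp is exactly where low-SNR conditions hurt: when noise power rivals speech power, many bins hit the floor, which motivates protecting the harmonic peaks of speech as the paper does.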

Comparison of Phone Boundary Alignment between Handlabels and Autolabels

  • Jang, Tae-Yeoub;Chung, Hyun-Song
    • Speech Sciences / v.10 no.1 / pp.27-39 / 2003
  • This study attempts to verify the reliability of automatically generated segment labels as compared to those obtained by conventional labelling by hand. First, an autolabeller is constructed using the standard HMM speech recognition technique. For evaluation, we compare the automatically generated labels with manually annotated labels for the same speech data. The comparison is performed by calculating the temporal difference between an autolabel boundary and its corresponding hand label boundary. When the mismatched duration between two labels falls within 10 msec, we consider the autolabel correct. The results suggest that overall 78% of autolabels are correctly obtained. It is found that the boundaries of obstruents are better aligned than those of sonorants and vowels. Among the stop classes, strong stops (in terms of manner of articulation) and velar stops (in terms of place of articulation) show better boundary alignment. The result suggests that more phone-specific consideration is necessary to improve autosegmentation performance.
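The evaluation metric described above reduces to a tolerance check per boundary pair. A minimal sketch, assuming boundaries are given in seconds and already paired up one-to-one:

```python
def boundary_accuracy(auto_boundaries, hand_boundaries, tol=0.010):
    """Fraction of autolabel boundaries lying within tol seconds of the
    corresponding hand-label boundary (10 msec in the study)."""
    if len(auto_boundaries) != len(hand_boundaries):
        raise ValueError("boundary lists must be paired")
    hits = sum(abs(a - h) <= tol
               for a, h in zip(auto_boundaries, hand_boundaries))
    return hits / len(auto_boundaries)
```

Running this separately per phone class (obstruents, sonorants, vowels) is what would reproduce the class-wise comparison the study reports.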


Speech Query Recognition for Tamil Language Using Wavelet and Wavelet Packets

  • Iswarya, P.;Radha, V.
    • Journal of Information Processing Systems / v.13 no.5 / pp.1135-1148 / 2017
  • Speech recognition is one of the fascinating fields in computer science. The accuracy of a speech recognition system may degrade due to noise present in the speech signal, so noise removal is an essential step in an Automatic Speech Recognition (ASR) system, and this paper proposes a new technique called combined thresholding for noise removal. Feature extraction is the process of converting an acoustic signal into a compact set of parameters. This paper also concentrates on improving Mel Frequency Cepstral Coefficient (MFCC) features by introducing the Discrete Wavelet Packet Transform (DWPT) in place of the Discrete Fourier Transform (DFT) block to provide more efficient signal analysis. Since the resulting feature vectors vary in size, a Self-Organizing Map (SOM) is used to choose the correct feature vector length. As a single classifier does not provide enough accuracy, this research proposes an Ensemble Support Vector Machine (ESVM) classifier, termed ESVM_SOM, which takes the fixed-length feature vector from the SOM as input. The experimental results showed that the proposed methods provide better results than existing methods.
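The abstract does not spell out the combined thresholding rule, so as background here is the standard soft-thresholding operator for wavelet denoising, one of the rules such combined schemes typically build on (this is a textbook baseline, not the paper's method):

```python
def soft_threshold(coeff, t):
    """Soft thresholding of a single wavelet coefficient: coefficients
    with magnitude at most t are zeroed (treated as noise), larger ones
    are shrunk toward zero by t."""
    if abs(coeff) <= t:
        return 0.0
    return coeff - t if coeff > 0 else coeff + t
```

Applied to detail coefficients of a noisy signal before the inverse transform, this suppresses low-magnitude noise while attenuating, rather than distorting, the larger speech-bearing coefficients.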

Speech Recognition of the Korean Vowel 'ㅡ' based on Neural Network Learning of Bulk Indicators (벌크 지표의 신경망 학습에 기반한 한국어 모음 'ㅡ'의 음성 인식)

  • Lee, Jae Won
    • KIISE Transactions on Computing Practices / v.23 no.11 / pp.617-624 / 2017
  • Speech recognition is now one of the most widely used technologies in HCI. Many applications where speech recognition may be used (such as home automation, automatic speech translation, and car navigation) are now under active development. In addition, the demand for speech recognition systems in mobile environments is rapidly increasing. This paper presents a method for instant recognition of the Korean vowel 'ㅡ', as part of a Korean speech recognition system. The proposed method uses bulk indicators, which are calculated in the time domain rather than the frequency domain, and consequently the computational cost of recognition can be reduced. The bulk indicators representing predominant sequence patterns of the vowel 'ㅡ' are learned by neural networks, and final recognition decisions are made by the trained neural networks. The results of the experiment show that the proposed method can achieve 88.7% recognition accuracy and a recognition speed of 0.74 msec per syllable.

Three-Stage Framework for Unsupervised Acoustic Modeling Using Untranscribed Spoken Content

  • Zgank, Andrej
    • ETRI Journal / v.32 no.5 / pp.810-818 / 2010
  • This paper presents a new framework for integrating untranscribed spoken content into the acoustic training of an automatic speech recognition system. Untranscribed spoken content plays a very important role for under-resourced languages because the production of manually transcribed speech databases still represents a very expensive and time-consuming task. We propose two new methods as part of the training framework. The first method focuses on combining initial acoustic models using a data-driven metric. The second method proposes an improved acoustic training procedure based on unsupervised transcriptions, in which word endings are modified by broad phonetic classes. The training framework was applied to baseline acoustic models using untranscribed spoken content from parliamentary debates. We include three types of acoustic models in the evaluation: baseline, reference content, and framework content models. The best overall result, an 18.02% word error rate, was achieved with the third type. This result demonstrates a statistically significant improvement over the baseline and reference acoustic models.