• Title/Summary/Keyword: 고립단어 인식 (isolated word recognition)


A Study on Korean Isolated Word Speech Detection and Recognition using Wavelet Feature Parameter (Wavelet 특징 파라미터를 이용한 한국어 고립 단어 음성 검출 및 인식에 관한 연구)

  • Lee, Jun-Hwan;Lee, Sang-Beom
    • The Transactions of the Korea Information Processing Society / v.7 no.7 / pp.2238-2245 / 2000
  • In this paper, feature parameters extracted with the wavelet transform from Korean isolated-word speech are used for speech detection and recognition. For speech detection, the wavelet-based parameters locate speech boundaries more accurately than the method using energy and zero-crossing rate. For recognition, MFCC feature parameters derived from the wavelet transform give results equal to those of MFCC feature parameters computed with the FFT. These results verify the usefulness of wavelet-transform feature parameters for speech analysis and recognition.

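As background for the comparison made in the abstract above, the following is a minimal sketch of per-frame wavelet subband log-energies (via PyWavelets) next to the conventional energy and zero-crossing-rate cues. The wavelet family, decomposition depth, and frame length are assumptions, not the paper's settings.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def wavelet_subband_energies(frame, wavelet="db4", level=4):
    """Log energy of each wavelet subband of one speech frame (assumed setup)."""
    coeffs = pywt.wavedec(frame, wavelet, level=level)  # [cA_L, cD_L, ..., cD_1]
    return np.array([np.log(np.sum(c ** 2) + 1e-10) for c in coeffs])

def energy_zcr(frame):
    """Conventional detection cues: short-time energy and zero-crossing rate."""
    energy = np.sum(frame ** 2)
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0
    return energy, zcr

# Example on a synthetic frame (240 samples, i.e. 15 ms at 16 kHz; an assumption)
frame = np.random.randn(240) * 0.01
print(wavelet_subband_energies(frame))
print(energy_zcr(frame))
```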

Isolated-Word Recognition Using Adaptively Partitioned Multisection Codebooks (음성적응(音聲適應) 구간분할(區間分割) 멀티섹션 코드북을 이용(利用)한 고립단어인식(孤立單語認識))

  • Ha, Kyeong-Min;Jo, Jeong-Ho;Hong, Jae-Kuen;Kim, Soo-Joong
    • Proceedings of the KIEE Conference / 1988.07a / pp.10-13 / 1988
  • An isolated-word recognition method using adaptively partitioned multisection codebooks is proposed. Each training utterance was divided into several sections according to its pattern, extracted by a labeling technique. For each pattern, reference codebooks were generated by clustering the training vectors of the same section. In the recognition procedure, input speech was divided into sections by the same method used in codebook generation and recognized as the reference word whose codebook yielded the smallest average distortion. The proposed method was tested on 100 Korean words and attained a recognition rate of about 96 percent.

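A minimal sketch of section-wise vector-quantization scoring in the spirit of the method above: one small codebook per section per word, and recognition by smallest average distortion. The equal-length section split and the codebook size are simplifying assumptions; the paper's adaptive, label-driven partitioning is not reproduced here.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def train_multisection_codebooks(utterances, n_sections=4, codebook_size=8):
    """Train one codebook per section from a list of (frames x dim) utterances."""
    section_vectors = [[] for _ in range(n_sections)]
    for frames in utterances:
        for s, chunk in enumerate(np.array_split(frames, n_sections)):
            section_vectors[s].append(chunk)
    codebooks = []
    for vecs in section_vectors:
        data = np.vstack(vecs)
        centroids, _ = kmeans2(data, codebook_size, minit="points")
        codebooks.append(centroids)
    return codebooks

def average_distortion(frames, codebooks):
    """Mean distance of each frame to the nearest codeword of its section."""
    n_sections = len(codebooks)
    total, count = 0.0, 0
    for s, chunk in enumerate(np.array_split(frames, n_sections)):
        d = np.linalg.norm(chunk[:, None, :] - codebooks[s][None, :, :], axis=2)
        total += d.min(axis=1).sum()
        count += len(chunk)
    return total / count

def recognize(frames, word_models):
    """Pick the word whose multisection codebooks give the smallest distortion."""
    return min(word_models, key=lambda w: average_distortion(frames, word_models[w]))
```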

An Implementation of the Real Time Speech Recognition for the Automatic Switching System (자동 교환 시스템을 위한 실시간 음성 인식 구현)

  • 박익현;이재성;김현아;함정표;유승균;강해익;박성현
    • The Journal of the Acoustical Society of Korea / v.19 no.4 / pp.31-36 / 2000
  • This paper describes the implementation and evaluation of a speech recognition automatic exchange system. The system provides exchange service based on speech recognition technology to government and public offices, companies, and educational institutions composed of a large number of members and departments. The recognizer is a speaker-independent, isolated-word, flexible-vocabulary recognizer based on an SCHMM (Semi-Continuous Hidden Markov Model). For real-time operation, a Texas Instruments TMS320C32 DSP is used. A system-operating terminal, which supports diagnosis of the speech recognition DSP and alteration of the recognition candidates, makes operation easy. In the experiment, 8 speakers pronounced words from a 1,300-word vocabulary related to the automatic exchange system over the wired telephone network, and the recognition system achieved 91.5% word accuracy.

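The system above is built around a speaker-independent SCHMM recognizer. As background, here is a minimal sketch of the semi-continuous emission computation (a shared Gaussian codebook with per-state mixture weights) and a left-to-right Viterbi word score. It is a generic illustration with assumed diagonal covariances, not the system's implementation.

```python
import numpy as np

def log_gaussian(x, mean, var):
    """Log density of a diagonal-covariance Gaussian."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def schmm_state_loglik(x, shared_means, shared_vars, state_weights):
    """Semi-continuous emission: shared Gaussian codebook, per-state mixture weights."""
    logs = np.array([log_gaussian(x, m, v) for m, v in zip(shared_means, shared_vars)])
    m = logs.max()  # log-sum-exp over the shared codebook
    return m + np.log(np.sum(state_weights * np.exp(logs - m)) + 1e-300)

def viterbi_score(frames, log_trans, shared_means, shared_vars, weights_per_state):
    """Best-path log score of an utterance against one left-to-right word model."""
    n_states = log_trans.shape[0]
    delta = np.full(n_states, -np.inf)
    delta[0] = schmm_state_loglik(frames[0], shared_means, shared_vars, weights_per_state[0])
    for x in frames[1:]:
        emit = np.array([schmm_state_loglik(x, shared_means, shared_vars, w)
                         for w in weights_per_state])
        delta = emit + np.max(delta[:, None] + log_trans, axis=0)
    return delta[-1]

# Isolated-word decision: score the utterance against every word model and keep the best.
```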

A New Hidden Error Function for Training of Multilayer Perceptrons (다층 퍼셉트론의 층별 학습 가속을 위한 중간층 오차 함수)

  • Oh Sang-Hoon
    • The Journal of the Korea Contents Association / v.5 no.6 / pp.57-64 / 2005
  • LBL (Layer-By-Layer) algorithms have been proposed to accelerate the training of MLPs (Multilayer Perceptrons). In these algorithms, each layer needs an error function for optimization, and the error function for the hidden layer in particular has a great effect on performance. Accordingly, this paper proposes a new hidden-layer error function for improving the performance of LBL training of MLPs. The hidden-layer error function is derived from the mean squared error of the output layer. The effectiveness of the proposed error function is demonstrated on handwritten digit recognition and isolated-word recognition tasks, where very fast learning convergence was obtained.

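For context, the sketch below shows the generic quantity such a layer-by-layer scheme starts from: an error signal for the hidden layer obtained by propagating the output-layer MSE back through the output weights. The paper derives its own hidden error function from the output MSE; this code is only an illustrative baseline, and all names are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hidden_error_from_output_mse(x, target, W_hid, W_out):
    """Hidden-layer error signal derived from the output-layer MSE by propagating
    its gradient through the output weights (illustrative, not the paper's function)."""
    h = sigmoid(x @ W_hid)                            # hidden activations
    y = sigmoid(h @ W_out)                            # network outputs
    delta_out = (y - target) * y * (1 - y)            # d(MSE)/d(output pre-activation)
    delta_hid = (delta_out @ W_out.T) * h * (1 - h)   # hidden-layer error signal
    return delta_hid, 0.5 * np.sum((y - target) ** 2)
```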

Definition and Evaluation of Korean Phone-Like Units using Hidden Markov Network (HM-Net을 이용한 한국어 유사음소 단위의 재 정의와 평가)

  • Lim Young-Chun;Oh Se-Jin;Jung Ho-Youl;Chung Hyun-Yeol
    • Proceedings of the Acoustical Society of Korea Conference / spring / pp.183-186 / 2002
  • Recently, context-dependent acoustic models have been widely used as recognition units in speech recognition, because the acoustic characteristics of a phone, i.e., the allophonic variation of the center phone caused by its preceding and following phones, can be modeled more accurately than with context-independent models. However, building robust context-dependent acoustic models requires more sophisticated solutions for tying model parameters and handling unseen contexts. Considering this, this paper introduces an HM-Net (Hidden Markov Network) topology-design method that incorporates a phonetic decision tree into the contextual-direction state splitting of the SSS (Successive State Splitting) algorithm, so that state splitting can be performed by combining acoustic and linguistic characteristics. In addition, since HM-Net can effectively model, through successive state splitting, the allophones that occur frequently in Korean, the 48 phone-like units previously used in our laboratory were redefined into 39 phone-like units by removing allophones unnecessary for building context-dependent acoustic models. To verify the effectiveness of the introduced method and the newly defined phone-like units, recognition experiments were carried out on isolated words, four connected digits, and continuous speech. In all experiments, the redefined 39 phone-like units proved effective for Korean speech recognition using context-dependent HM-Net acoustic models. In particular, for the continuous speech recognition experiment, an average recognition-rate improvement of 15.08% was obtained over the previous 48 phone-like units.

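The SSS-with-phonetic-decision-tree idea above comes down to choosing, for a given state, the yes/no phonetic question whose split yields the largest likelihood gain. The following is a minimal single-Gaussian sketch of that selection step, with an assumed question format (a set of context phones per question); it is not the HM-Net implementation.

```python
import numpy as np

def gaussian_loglik(frames):
    """Total log-likelihood of frames under one diagonal Gaussian fit by ML."""
    mean = frames.mean(axis=0)
    var = frames.var(axis=0) + 1e-6
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (frames - mean) ** 2 / var)

def best_phonetic_split(frames, contexts, questions):
    """Pick the yes/no phonetic question with the largest log-likelihood gain.
    `questions` maps a name to the set of context phones answering 'yes' (assumed format)."""
    base = gaussian_loglik(frames)
    best = (None, 0.0)
    for name, phone_set in questions.items():
        yes = np.array([c in phone_set for c in contexts])
        if yes.all() or (~yes).all():
            continue  # the question does not actually split the data
        gain = gaussian_loglik(frames[yes]) + gaussian_loglik(frames[~yes]) - base
        if gain > best[1]:
            best = (name, gain)
    return best
```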

Improvement of the Linear Predictive Coding with Windowed Autocorrelation (윈도우가 적용된 자기상관에 의한 선형예측부호의 개선)

  • Lee, Chang-Young;Lee, Chai-Bong
    • The Journal of the Korea institute of electronic communication sciences / v.6 no.2 / pp.186-192 / 2011
  • In this paper, we propose a new procedure for improving linear predictive coding. To reduce the error power incurred by the coding, we interchanged the order of the two procedures of windowing the signal and linear prediction. This scheme corresponds to LPC extraction with a windowed autocorrelation. The proposed method requires more computation time because it necessitates matrix inversion over more parameters, whereas the conventional technique admits the efficient Levinson-Durbin recursion with fewer parameters. Experimental tests over various speech phonemes showed, however, that our procedure yields about 5% less power distortion than the conventional technique. Consequently, the proposed method is preferable to the conventional technique as far as fidelity is concerned. In a separate speaker-dependent speech recognition test on 50 isolated words pronounced by 40 people, our approach also yielded better performance.
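To make the reversed order of windowing and prediction concrete, the sketch below contrasts conventional autocorrelation-method LPC (window the signal, then solve the Toeplitz normal equations) with a windowed prediction-error formulation whose normal equations are no longer Toeplitz and need a general matrix solve. The exact weighting in the second function is an assumption for illustration, not necessarily the paper's formulation.

```python
import numpy as np

def lpc_conventional(x, order, window=None):
    """Standard autocorrelation-method LPC: window the signal, then predict."""
    if window is None:
        window = np.hamming(len(x))
    xw = x * window
    r = np.correlate(xw, xw, mode="full")[len(xw) - 1:len(xw) + order]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])   # Toeplitz; Levinson-Durbin also applies

def lpc_windowed_error(x, order, window=None):
    """Reversed order (sketch): minimize a *windowed* prediction-error power
    sum_n w[n] * (x[n] - sum_i a_i x[n-i])^2 on the unwindowed signal.
    The weighted system is not Toeplitz, so a general solve is required."""
    if window is None:
        window = np.hamming(len(x))
    N = len(x)
    X = np.array([[x[n - i] for i in range(1, order + 1)] for n in range(order, N)])
    w = window[order:N]
    A = X.T @ (w[:, None] * X)                  # weighted covariance matrix
    b = X.T @ (w * x[order:N])
    return np.linalg.solve(A, b)
```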

Isolated Word Recognition with the E-MIND II Neurocomputer (E-MIND II를 이용한 고립 단어 인식 시스템의 설계)

  • Kim, Joon-Woo;Jeong, Hong;Kim, Myeong-Won
    • Journal of the Korean Institute of Telematics and Electronics B / v.32B no.11 / pp.1527-1535 / 1995
  • This paper introduces an isolated word recognition system realized on a neurocomputer called the E-MIND II, a 2-D torus wavefront array processor consisting of 256 DNP IIs. The DNP II is an all-digital VLSI unit processor for the E-MIND II featuring the capability to emulate more than a thousand neurons, a 40 MHz clock, and on-chip learning. Built from these PEs in a 2-D toroidal mesh architecture, the E-MIND II can reach a computation speed of over 2 Gcps. These advantages in computing speed, scalability, computer interface, and learning make the E-MIND II especially suitable for real-time applications such as speech recognition. We show how to map a TDNN structure onto this array and how to code the learning and recognition algorithms for user-independent isolated word recognition. Through hardware simulation, we show that the recognition rate of this system is about 97% for 30 robot-control command words.

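The mapping described above targets a TDNN, whose basic layer is a shared-weight transform applied over a sliding window of frames. A minimal sketch of one such layer follows; the sizes, activation (tanh), and edge handling are assumptions.

```python
import numpy as np

def tdnn_layer(frames, weights, bias, context=2):
    """One TDNN layer: each output at time t looks at frames t-context..t+context.
    `weights` has shape (out_dim, (2*context+1) * in_dim); edge frames are dropped."""
    T, d = frames.shape
    outputs = []
    for t in range(context, T - context):
        window = frames[t - context:t + context + 1].reshape(-1)  # stacked context
        outputs.append(np.tanh(weights @ window + bias))
    return np.array(outputs)

# Tiny usage example with assumed sizes: 30 frames of 16-dim features, 8 hidden units
rng = np.random.default_rng(0)
frames = rng.standard_normal((30, 16))
W = rng.standard_normal((8, 5 * 16)) * 0.1
b = np.zeros(8)
print(tdnn_layer(frames, W, b).shape)   # (26, 8)
```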

An Improvement of Stochastic Feature Extraction for Robust Speech Recognition (강인한 음성인식을 위한 통계적 특징벡터 추출방법의 개선)

  • 김회린;고진석
    • The Journal of the Acoustical Society of Korea / v.23 no.2 / pp.180-186 / 2004
  • The presence of noise in speech signals degrades the performance of recognition systems in which there are mismatches between the training and test environments. To make a speech recognizer robust, these mismatches must be compensated. In this paper, we studied an improvement of stochastic feature extraction based on band-SNR for robust speech recognition. First, we proposed a modified version of the multi-band spectral subtraction (MSS) method that adjusts the subtraction level of the noise spectrum according to the band-SNR. In the proposed method, referred to as M-MSS, a noise normalization factor is newly introduced to finely control the over-estimation factor depending on the band-SNR. We also modified the architecture of the stochastic feature extraction (SFE) method; better performance was obtained when the spectral subtraction was applied in the power spectrum domain rather than in the mel-scale domain. This method is denoted M-SFE. Lastly, we applied the M-MSS method to the modified stochastic feature extraction structure, which is denoted the MMSS-MSFE method. The proposed methods were evaluated on isolated word recognition under various noise environments. The average error rates of the M-MSS, M-SFE and MMSS-MSFE methods were reduced by 18.6%, 15.1% and 33.9%, respectively, relative to the ordinary spectral subtraction (SS) method. From these results, we conclude that the proposed methods are good candidates for robust feature extraction in noisy speech recognition.
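A generic multi-band spectral subtraction sketch follows, with the over-subtraction factor chosen from each band's SNR. It illustrates only the band-SNR-dependent idea; the paper's M-MSS rule and its noise normalization factor are not reproduced, and the alpha schedule is an assumption.

```python
import numpy as np

def multiband_spectral_subtraction(power_spec, noise_spec, band_edges, floor=0.01):
    """Per-band spectral subtraction with an SNR-dependent over-subtraction factor.
    `band_edges` is a list of (lo, hi) bin ranges; bins outside them are left unchanged."""
    clean = power_spec.copy()
    for lo, hi in band_edges:
        sig = power_spec[lo:hi].sum()
        noi = noise_spec[lo:hi].sum() + 1e-12
        snr_db = 10.0 * np.log10(sig / noi + 1e-12)
        # more aggressive subtraction in low-SNR bands, milder in high-SNR bands
        alpha = np.clip(4.0 - 0.15 * snr_db, 1.0, 5.0)
        band = power_spec[lo:hi] - alpha * noise_spec[lo:hi]
        clean[lo:hi] = np.maximum(band, floor * noise_spec[lo:hi])
    return clean
```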

Selective Attentive Learning for Fast Speaker Adaptation in Multilayer Perceptron (다층 퍼셉트론에서의 빠른 화자 적응을 위한 선택적 주의 학습)

  • 김인철;진성일
    • The Journal of the Acoustical Society of Korea / v.20 no.4 / pp.48-53 / 2001
  • In this paper, a selective attentive learning method is proposed to improve the learning speed of the multilayer perceptron trained with the error backpropagation algorithm. Three attention criteria are introduced to effectively determine which set of input patterns, or which portion of the network, is attended to for effective learning. These criteria are based on the mean squared error of the output layer and the class-selective relevance of the hidden nodes. The learning time is reduced by lowering the computational cost per iteration. The effectiveness of the proposed method is demonstrated in a speaker adaptation task for an isolated word recognition system. The experimental results show that the proposed selective attention technique can reduce the learning time by more than 60% on average.

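As an illustration of the pattern-selection side of such attention criteria, the sketch below skips training patterns whose output-layer MSE is already below a threshold, so each pass backpropagates only the remaining "hard" patterns. The hidden-node relevance criterion is not reproduced, and all names here are hypothetical.

```python
import numpy as np

def select_attended_patterns(inputs, targets, forward_fn, threshold):
    """Attend only to patterns whose output-layer MSE is still above `threshold`.
    `forward_fn` is any function mapping a batch of inputs to network outputs."""
    outputs = forward_fn(inputs)                          # (n_patterns, n_outputs)
    per_pattern_mse = np.mean((outputs - targets) ** 2, axis=1)
    attended = per_pattern_mse > threshold
    return inputs[attended], targets[attended], attended.mean()

# Usage with a hypothetical trained network:
# x_att, t_att, frac = select_attended_patterns(X, T, mlp.forward, threshold=0.01)
# Backpropagate only on (x_att, t_att); `frac` is the fraction of patterns attended to.
```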