• Title/Summary/Keyword: 고립단어 인식 (isolated word recognition)

Search results: 109

Isolated word recognition using binary pattern (이치화 패턴을 이용한 고립단어 음성인식)

  • Ryoo, J.H.; Lee, Y.J.; Park, C.K.; Kim, Y.H.; Kim, K.T.
    • Proceedings of the KIEE Conference / 1987.07b / pp.1602-1605 / 1987
  • This paper describes isolated word recognition using binary patterns that denote the presence or absence of a local peak in each channel. In a closed test, correct recognition rates of 81.3% and 76.8% were achieved for 10 male and 10 female speakers, respectively, with 1588 test samples each.
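
A minimal sketch of the binary-pattern idea above, assuming a per-frame filterbank energy vector; the channel count, peak criterion, and example values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def binary_peak_pattern(filterbank_frame):
    """Mark each channel 1 if it is a local peak of the filterbank energies, else 0.

    filterbank_frame: 1-D array of per-channel energies for one analysis frame.
    """
    e = np.asarray(filterbank_frame, dtype=float)
    pattern = np.zeros(len(e), dtype=int)
    for ch in range(1, len(e) - 1):
        # a "local peak" here means strictly greater than both neighbouring channels
        if e[ch] > e[ch - 1] and e[ch] > e[ch + 1]:
            pattern[ch] = 1
    return pattern

# Example: one frame of 8 filterbank channel energies (made-up values)
frame = [0.2, 0.9, 0.4, 0.3, 0.7, 0.6, 0.1, 0.05]
print(binary_peak_pattern(frame))  # e.g. [0 1 0 0 1 0 0 0]
```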

Phoneme Segmentation Using Voice/Unvoiced/Silence Classifier and Spectral Information (유성/무성/묵음 분류기와 주파수 스펙트럼을 이용한 음소 경계 검출)

  • Lee Sang-Rae; Han Hyun-Bae; Hahn Minsoo
    • Proceedings of the Acoustical Society of Korea Conference / Spring / pp.86-91 / 1999
  • In this paper, a phoneme boundary detector was implemented using a voiced/unvoiced/silence classifier and spectral comparison. Phoneme boundary detection is very important in fields such as speech recognition, synthesis, and analysis. Segments classified as voiced by the voiced/unvoiced/silence classifier were subdivided into phoneme units through spectral comparison, while segments classified as unvoiced were treated as a single phoneme unit in consideration of the phonetic characteristics of Korean. For the spectral comparison of voiced segments, a modified Itakura-Saito distance measure and a Euclidean MFCC (Mel Frequency Cepstrum Coefficients) distance measure were used, and the best results were obtained when the compared frames were separated by one intervening frame. Finally, using average phoneme length information, the segments detected as phoneme boundaries were further subdivided or merged. The voiced/unvoiced/silence classifier showed an accuracy of 94.247% for isolated words recorded in an office environment, and the phoneme boundary detection showed an accuracy of 72.8%.
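
A minimal sketch of the Euclidean MFCC distance comparison mentioned in the abstract, computed between frames that skip one intermediate frame (the setting reported to work best); the feature dimensions and values are illustrative:

```python
import numpy as np

def mfcc_euclidean_distances(mfcc, skip=1):
    """Euclidean distance between frame t and frame t + skip + 1.

    mfcc: array of shape (num_frames, num_coeffs).
    Large distances suggest a spectral change, i.e. a candidate phoneme boundary.
    """
    mfcc = np.asarray(mfcc, dtype=float)
    step = skip + 1
    return np.linalg.norm(mfcc[step:] - mfcc[:-step], axis=1)

# Example with random 12-dimensional MFCC vectors for 20 frames
rng = np.random.default_rng(0)
dists = mfcc_euclidean_distances(rng.normal(size=(20, 12)))
print(dists.shape)  # (18,) distances, one per compared frame pair
```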

Isolated Word Recognition Using Hidden Markov Models with Bounded State Duration (제한적 상태지속시간을 갖는 HMM을 이용한 고립단어 인식)

  • 이기희; 임인칠
    • Journal of the Korean Institute of Telematics and Electronics B / v.32B no.5 / pp.756-764 / 1995
  • In this paper, we propose MLP (multilayer perceptron)-based HMMs (hidden Markov models) with bounded state duration for isolated word recognition. The minimum and maximum state durations for each state of an HMM are estimated during the training phase and used as parameters to constrain state transitions in the recognition phase. The procedure for estimating these parameters and the recognition algorithm using the proposed HMMs are also described. Speaker-independent isolated word recognition experiments on a vocabulary of 10 city names and 11 digits indicate that the recognition rate can be improved by adjusting the minimum state durations.
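
A simplified sketch of how minimum and maximum state durations can constrain the state transitions of a left-to-right HMM during Viterbi-style decoding; the paper's actual estimation and decoding procedures are not reproduced here, and all names and values are illustrative:

```python
import numpy as np

def viterbi_bounded_duration(log_b, d_min, d_max):
    """Best state sequence for a left-to-right HMM with bounded state durations.

    log_b: (T, N) log observation likelihoods per frame and state.
    d_min, d_max: per-state minimum/maximum durations in frames.
    Only "stay in the current state" or "advance to the next state" is allowed.
    """
    T, N = log_b.shape
    NEG = -1e30
    best = {(0, 1): log_b[0, 0]}   # hypotheses keyed by (state, duration so far)
    back = {}
    for t in range(1, T):
        new = {}
        for (s, d), score in best.items():
            # stay in state s only while its maximum duration is not exceeded
            if d < d_max[s]:
                key, val = (s, d + 1), score + log_b[t, s]
                if val > new.get(key, NEG):
                    new[key], back[(t, key)] = val, (s, d)
            # advance to s+1 only once the minimum duration of s is satisfied
            if d >= d_min[s] and s + 1 < N:
                key, val = (s + 1, 1), score + log_b[t, s + 1]
                if val > new.get(key, NEG):
                    new[key], back[(t, key)] = val, (s, d)
        best = new
    # pick the best hypothesis that ends in the last state with a legal duration
    finals = {k: v for k, v in best.items() if k[0] == N - 1 and k[1] >= d_min[N - 1]}
    key = max(finals, key=finals.get)
    path = [key[0]]
    for t in range(T - 1, 0, -1):
        key = back[(t, key)]
        path.append(key[0])
    return path[::-1]

# Tiny example: 3-state model, 8 frames, random log-likelihoods
rng = np.random.default_rng(1)
print(viterbi_bounded_duration(rng.normal(size=(8, 3)), d_min=[2, 2, 2], d_max=[4, 4, 4]))
```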

Isolated Word Recognition Using a Speaker-Adaptive Neural Network (화자적응 신경망을 이용한 고립단어 인식)

  • 이기희; 임인칠
    • Journal of the Korean Institute of Telematics and Electronics B / v.32B no.5 / pp.765-776 / 1995
  • This paper describes a speaker adaptation method to improve the recognition performance of an MLP (multilayer perceptron)-based HMM (hidden Markov model) speech recognizer. In this method, a 1st-order linear transformation network is used to fit the data of a new speaker to the MLP. The transformation parameters are adjusted by back-propagating the classification error to the transformation network while leaving the MLP classifier fixed. The recognition system is based on semicontinuous HMMs which use the MLP as a fuzzy vector quantizer. The experimental results show that rapid speaker adaptation yielding high recognition performance can be accomplished by this method. Specifically, for supervised adaptation, the error rate is significantly reduced from 9.2% for the baseline system to 5.6% after speaker adaptation, and for unsupervised adaptation the error rate is reduced to 5.1% without any supervision from the new speakers.
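
A rough sketch of the adaptation scheme described above: a first-order linear transformation placed in front of a fixed MLP, with the classification error back-propagated only into the transformation. The network sizes, feature dimension, and training details are illustrative assumptions (PyTorch is used for brevity):

```python
import torch
import torch.nn as nn

# Fixed classifier MLP trained on reference speakers (architecture is illustrative).
mlp = nn.Sequential(nn.Linear(12, 64), nn.Sigmoid(), nn.Linear(64, 10))
for p in mlp.parameters():
    p.requires_grad_(False)  # the classifier itself stays fixed during adaptation

# First-order linear transformation network y = A x + b, initialised to identity.
transform = nn.Linear(12, 12)
with torch.no_grad():
    transform.weight.copy_(torch.eye(12))
    transform.bias.zero_()

optimiser = torch.optim.SGD(transform.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Adaptation data from the new speaker: feature vectors and (for supervised
# adaptation) their class labels. Random placeholders here.
features = torch.randn(200, 12)
labels = torch.randint(0, 10, (200,))

for epoch in range(20):
    optimiser.zero_grad()
    logits = mlp(transform(features))   # classification error is computed...
    loss = loss_fn(logits, labels)
    loss.backward()                     # ...and back-propagated only into A and b
    optimiser.step()
```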

Performance Comparison between the PMC and VTS Method for the Isolated Speech Recognition in Car Noise Environments (자동차 잡음환경 고립단어 음성인식에서의 VTS와 PMC의 성능비교)

  • Chung, Yong-Joo; Lee, Seung-Wook
    • Speech Sciences / v.10 no.3 / pp.251-261 / 2003
  • There have been many research efforts to overcome the problems of speech recognition in noisy conditions. Among noise-robust speech recognition methods, model-based adaptation approaches have been shown to be quite effective. In particular, the PMC (parallel model combination) method is very popular and has been shown to give considerably improved recognition results compared with conventional methods. In this paper, we experimented with the VTS (vector Taylor series) algorithm, which is also based on model parameter transformation but has not attracted much interest from researchers in this area. To verify its effectiveness, we employed the algorithm in a continuous-density HMM (hidden Markov model) system. We compared the performance of the VTS algorithm with the PMC method and found that it gave better results than the PMC method.
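
For orientation, a highly simplified sketch of the PMC idea referred to above: Gaussian means of the clean-speech model and the noise model are combined in the linear spectral domain via a "log-add" approximation. The full PMC method also handles the cepstral transform and the variances, and VTS instead linearizes the same mismatch function with a first-order Taylor expansion; the values below are illustrative:

```python
import numpy as np

def pmc_log_add(clean_log_mean, noise_log_mean):
    """Simplified PMC 'log-add' combination of Gaussian means.

    Both means are assumed to live in the log filterbank energy domain; the
    corrupted-speech mean is the log of the sum of the linear-domain energies.
    """
    return np.log(np.exp(clean_log_mean) + np.exp(noise_log_mean))

# Example: combine a clean-speech state mean with a noise mean (made-up values)
clean = np.array([2.0, 1.5, 0.8])
noise = np.array([1.0, 1.2, 1.5])
print(pmc_log_add(clean, noise))
```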

Isolated word recognition using the SOFM-HMM and the Inertia (관성과 SOFM-HMM을 이용한 고립단어 인식)

  • 윤석현; 정광우; 홍광석; 박병철
    • Journal of the Korean Institute of Telematics and Electronics B / v.31B no.6 / pp.17-24 / 1994
  • This paper studies Korean word recognition and suggests a method that stabilizes the state transitions in the HMM by applying 'inertia' to the feature vector sequences. In order to reduce the quantization distortion while taking the probability distribution of the input vectors into account, we used the SOFM, an unsupervised learning method, as a vector quantizer. By applying inertia to the feature vector sequences, the overlap between the probability distributions of the response paths of each word on the self-organizing feature map can be reduced and the state transitions in the HMM can be stabilized. To evaluate the performance of the method, we carried out experiments on 50 DDD area names. The results showed that applying inertia to the feature vector sequences improved the recognition rate by 7.4% and makes more HMMs available without reducing the recognition rate for an SOFM with a fixed number of neurons.
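
The abstract does not define 'inertia' precisely; one plausible reading is a momentum-style smoothing of consecutive feature vectors, sketched below purely as an assumption:

```python
import numpy as np

def apply_inertia(features, alpha=0.5):
    """Momentum-style smoothing of a feature vector sequence.

    NOTE: this is only one plausible interpretation of the paper's 'inertia';
    each smoothed vector is pulled toward the previous smoothed vector.
    """
    features = np.asarray(features, dtype=float)
    smoothed = np.empty_like(features)
    smoothed[0] = features[0]
    for t in range(1, len(features)):
        smoothed[t] = alpha * smoothed[t - 1] + (1.0 - alpha) * features[t]
    return smoothed

# Example: 5 frames of 3-dimensional features (random values)
print(apply_inertia(np.random.default_rng(2).normal(size=(5, 3))))
```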

Exclusion of Non-similar Candidates using Positional Accuracy based on Levenstein Distance from N-best Recognition Results of Isolated Word Recognition (레벤스타인 거리에 기초한 위치 정확도를 이용한 고립 단어 인식 결과의 비유사 후보 단어 제외)

  • Yun, Young-Sun; Kang, Jeom-Ja
    • Phonetics and Speech Sciences / v.1 no.3 / pp.109-115 / 2009
  • Many isolated word recognition systems may generate non-similar words as recognition candidates because they use only acoustic information. In this paper, we investigate several techniques that can exclude non-similar words from the N-best candidate words by applying the Levenstein distance measure. First, word distance methods based on phone and syllable distances are considered; these methods use the Levenstein distance on the phones, or a double Levenstein distance algorithm on the syllables, of the candidates. Next, word similarity approaches are presented that use the positional information of the characters in the candidate words. Each character's position is labeled as inserted, deleted, or correct after alignment between the source and target strings, and the word similarities are obtained from the characters' positional probabilities, i.e., the frequency ratio of observing the same character at that position. The experimental results show that the proposed methods are effective for removing non-similar words from the N-best recognition candidates without loss of system performance.
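
A minimal sketch of the alignment step implied above: a standard Levenstein (Levenshtein) edit distance with a backtrace that labels each position as correct, substituted, inserted, or deleted; the labeling terminology and example strings are illustrative:

```python
def levenshtein_alignment(candidate, reference):
    """Edit distance between two strings plus per-position labels from one alignment."""
    m, n = len(candidate), len(reference)
    # dp[i][j]: edit distance between candidate[:i] and reference[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if candidate[i - 1] == reference[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # extra character in candidate
                           dp[i][j - 1] + 1,         # missing reference character
                           dp[i - 1][j - 1] + cost)  # match / substitution
    # trace back one optimal alignment to label positions
    labels, i, j = [], m, n
    while i > 0 or j > 0:
        if i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + (candidate[i - 1] != reference[j - 1]):
            labels.append('correct' if candidate[i - 1] == reference[j - 1] else 'substituted')
            i, j = i - 1, j - 1
        elif i > 0 and dp[i][j] == dp[i - 1][j] + 1:
            labels.append('inserted')   # candidate has an extra character here
            i -= 1
        else:
            labels.append('deleted')    # reference character missing from candidate
            j -= 1
    return dp[m][n], labels[::-1]

print(levenshtein_alignment("경기도", "강원도"))  # distance and per-position labels
```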

Various Approaches to Improve Exclusion Performance of Non-similar Candidates from N-best Recognition Results on Isolated Word Recognition (고립 단어 인식 결과의 비유사 후보 단어 제외 성능을 개선하기 위한 다양한 접근 방법 연구)

  • Yun, Young-Sun
    • Phonetics and Speech Sciences / v.2 no.4 / pp.153-161 / 2010
  • Many isolated word recognition systems may generate non-similar words as recognition candidates because they use only acoustic information. Previous studies [1,2] investigated several techniques that can exclude non-similar words from the N-best candidate words by applying the Levenstein distance measure. This paper discusses various techniques for improving the removal of non-similar recognition results. The methods considered include comparison penalties or weights, phone accuracy based on confusion information, weighting candidates by ranking order, and partial comparisons. The experimental results show that some of the proposed methods retain more accurate recognition results than the previous methods.
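
As an illustration of one of the listed ideas, a tiny sketch of weighting N-best candidates by their ranking order; the decay factor and example words are assumptions, not the paper's values:

```python
def rank_weighted_scores(similarities, decay=0.9):
    """Weight each N-best candidate's similarity score by its ranking order.

    similarities: list of (word, similarity) pairs ordered by recognition rank.
    """
    return [(word, sim * decay ** rank)
            for rank, (word, sim) in enumerate(similarities)]

print(rank_weighted_scores([("서울", 0.92), ("수원", 0.85), ("부산", 0.40)]))
```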

Noisy Speech Recognition using Probabilistic Spectral Subtraction (확률적 스펙트럼 차감법을 이용한 잡음 환경에서의 음성인식)

  • Chi, Sang-Mun; Oh, Yung-Hwan
    • The Journal of the Acoustical Society of Korea / v.16 no.6 / pp.94-99 / 1997
  • This paper describes a probabilistic spectral subtraction technique that uses knowledge of both the noise and the speech to reduce automatic speech recognition errors in noisy environments. The spectral subtraction method estimates a noise prototype in non-speech intervals, and the spectrum of the clean speech is obtained by subtracting this noise prototype from the spectrum of the noisy speech. Thus, noise cannot be suppressed effectively with a single noise prototype when its characteristics differ from those of the noise contained in the input noisy speech. To overcome this drawback, multiple noise prototypes are used in the probabilistic subtraction method. In this paper, the probabilistic characteristics of the noise and the knowledge of speech embedded in hidden Markov models trained in clean environments are used to suppress the noise. Furthermore, dynamic feature parameters are considered as well as static feature parameters for effective noise suppression. The proposed method reduced error rates in the recognition of 50 Korean words: the recognition rate was 86.25% with probabilistic subtraction, 72.75% without any noise suppression, and 80.25% with spectral subtraction at an SNR (signal-to-noise ratio) of 10 dB.
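
A minimal sketch of the basic spectral subtraction step that the probabilistic method builds on, using a single noise prototype estimated from assumed non-speech frames; the spectral floor and array sizes are illustrative:

```python
import numpy as np

def spectral_subtraction(noisy_mag, noise_prototype, floor=0.01):
    """Basic magnitude spectral subtraction with a single noise prototype.

    noisy_mag: (num_frames, num_bins) magnitude spectra of the noisy speech.
    noise_prototype: (num_bins,) average magnitude spectrum of non-speech frames.
    Negative results are clipped to a small spectral floor.
    """
    cleaned = np.asarray(noisy_mag, dtype=float) - noise_prototype
    return np.maximum(cleaned, floor * noise_prototype)

# Example: estimate the prototype from the first 10 (assumed non-speech) frames
rng = np.random.default_rng(3)
noisy = np.abs(rng.normal(size=(50, 128)))
prototype = noisy[:10].mean(axis=0)
print(spectral_subtraction(noisy, prototype).shape)
```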

The Effect of the Number of Phoneme Clusters on Speech Recognition (음성 인식에서 음소 클러스터 수의 효과)

  • Lee, Chang-Young
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.9 no.11 / pp.1221-1226 / 2014
  • In an effort to improve the efficiency of speech recognition, we investigate the effect of the number of phoneme clusters. For this purpose, codebooks with varying numbers of phoneme clusters are prepared by a modified k-means clustering algorithm. The subsequent processing uses fuzzy vector quantization (FVQ) and hidden Markov models (HMM) for the speech recognition test. The results show that there are two distinct regimes: for a large number of phoneme clusters, the recognition performance is roughly independent of it, whereas for a small number of phoneme clusters the recognition error rate increases nonlinearly as the number is decreased. From numerical calculation, it is found that this nonlinear regime might be modeled by a power-law function. The results also show that about 166 phoneme clusters would be optimal for the recognition of 300 isolated words, which amounts to roughly 3 variations per phoneme.
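
A small sketch of the power-law modeling mentioned in the abstract, fitting error rate against the number of phoneme clusters in log-log space; the data points are made up for illustration and are not the paper's results:

```python
import numpy as np

# Illustrative (made-up) error rates for small codebook sizes, where the error
# is reported to grow nonlinearly as the number of phoneme clusters shrinks.
n_clusters = np.array([20, 40, 80, 160])
error_rate = np.array([0.30, 0.18, 0.11, 0.07])

# Fit a power law err = a * n^b by linear regression in log-log space.
slope, intercept = np.polyfit(np.log(n_clusters), np.log(error_rate), 1)
print(f"error_rate ~ {np.exp(intercept):.2f} * n_clusters^{slope:.2f}")
```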