• Title/Summary/Keyword: Utterance verification

An Utterance Verification using Vowel String (모음 열을 이용한 발화 검증)

  • 유일수;노용완;홍광석
    • Proceedings of the Korea Institute of Convergence Signal Processing / 2003.06a / pp.46-49 / 2003
  • The use of confidence measures for word/utterance verification has become an essential component of any speech input application. Confidence measures have applications to a number of problems such as rejection of incorrect hypotheses, speaker adaptation, or adaptive modification of the hypothesis score during search in continuous speech recognition. In this paper, we present a new utterance verification method using vowel strings. Using subword HMMs of VCCV units, we create anti-models that include the vowel strings of the hypothesized words. The experimental results show that the utterance verification rate of the proposed method is about 79.5%.

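The decision described in the abstract above is, at its core, an anti-model log-likelihood ratio test. Below is a minimal sketch of that decision rule, assuming per-frame log-likelihoods are already available from the hypothesis model and from the vowel-string anti-model; the function name, normalization, and threshold are illustrative, not taken from the paper.

```python
import numpy as np

def verify_utterance(ll_hyp_frames, ll_anti_frames, threshold=0.0):
    """Accept the hypothesis if the duration-normalized log-likelihood
    ratio against the anti-model exceeds a threshold (illustrative)."""
    ll_hyp = np.asarray(ll_hyp_frames, dtype=float)
    ll_anti = np.asarray(ll_anti_frames, dtype=float)
    # Duration-normalized log-likelihood ratio, a common LRT-style confidence score.
    llr = (ll_hyp.sum() - ll_anti.sum()) / len(ll_hyp)
    return llr >= threshold, llr

# Toy example: per-frame log-likelihoods for a 5-frame segment.
accepted, score = verify_utterance([-2.1, -1.8, -2.0, -1.9, -2.2],
                                   [-2.6, -2.4, -2.7, -2.5, -2.8])
print(accepted, round(score, 3))
```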

Utterance Verification Using Anti-models Based on Neighborhood Information (이웃 정보에 기초한 반모델을 이용한 발화 검증)

  • Yun, Young-Sun
    • MALSORI / no.67 / pp.79-102 / 2008
  • In this paper, we investigate the relation between the Bayes factor and likelihood ratio test (LRT) approaches and apply the neighborhood information of the Bayes factor to building an alternative hypothesis model for the LRT system. To apply the neighborhood approaches, we examine distance measures between models and the algorithms to be used. We also evaluate several methods for improving the performance of utterance verification using neighborhood information. Among these methods, the system that adopts anti-models built by collecting mixtures of neighborhood models obtains a maximum error rate reduction of 17% compared to the baseline and to the linear and weighted combinations of neighborhood models.

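One way to read the best-performing variant above is that each word's anti-model is built by pooling Gaussian mixture components from its nearest neighbor models. The sketch below follows that reading; the distance measure (between mean vectors), the value of K, and the data layout are assumptions made for illustration only.

```python
import numpy as np

def build_neighborhood_anti_model(models, target, k=3):
    """Pool mixture components from the k models nearest to `target`
    to form its anti-model (illustrative mean-vector distance)."""
    center = np.mean(models[target]["means"], axis=0)
    dists = {name: np.linalg.norm(np.mean(m["means"], axis=0) - center)
             for name, m in models.items() if name != target}
    neighbors = sorted(dists, key=dists.get)[:k]
    weights = np.concatenate([models[n]["weights"] for n in neighbors])
    means = np.vstack([models[n]["means"] for n in neighbors])
    return {"weights": weights / weights.sum(), "means": means}

# Toy word models, each a 2-component mixture over 4-dimensional features.
rng = np.random.default_rng(0)
models = {w: {"weights": np.ones(2) / 2, "means": rng.normal(size=(2, 4))}
          for w in ["one", "two", "three", "four", "five"]}
anti = build_neighborhood_anti_model(models, "one", k=2)
print(anti["weights"].shape, anti["means"].shape)
```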

Utterance Verification and Substitution Error Correction In Korean Connected Digit Recognition (한국어 연결숫자 인식에서의 발화 검증과 대체오류 수정)

  • Jung Du Kyung;Song Hwa Jeon;Jung Ho-Young;Kim Hyung Soon
    • MALSORI / no.45 / pp.79-91 / 2003
  • Utterance verification aims at rejecting both out-of-vocabulary (OOV) utterances and low-confidence-scored in-vocabulary (IV) utterances. For utterance verification on a Korean connected digit recognition task, we investigate several methods of constructing filler and anti-digit models. In particular, we propose a substitution error correction method based on 2-best decoding results. In this method, when the first candidate is rejected, the second candidate is selected if it is accepted by a specific hypothesis test, instead of simply rejecting the utterance. Experimental results show that the proposed method outperforms the conventional log-likelihood ratio (LLR) test method.

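The 2-best back-off rule described above reads directly as a small decision procedure. A minimal sketch, assuming a confidence function and an acceptance threshold are already available; the names and threshold value are illustrative.

```python
def decide_with_2best(candidates, confidence, accept_threshold=0.5):
    """candidates: list of (digit_string, score) pairs from 2-best decoding.
    If the 1-best is rejected, fall back to the 2-best instead of
    rejecting outright (illustrative version of the back-off rule)."""
    first, second = candidates[0], candidates[1]
    if confidence(first) >= accept_threshold:
        return first[0]              # accept the 1-best candidate
    if confidence(second) >= accept_threshold:
        return second[0]             # substitution error correction: accept the 2-best
    return None                      # reject the utterance

# Toy usage with a dummy confidence function based on the decoder score.
cands = [("1234", 0.42), ("1284", 0.61)]
print(decide_with_2best(cands, confidence=lambda c: c[1]))
```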

Utterance Verification using Phone-Level Log-Likelihood Ratio Patterns in Word Spotting Systems (핵심어 인식기에서 단어의 음소레벨 로그 우도 비율의 패턴을 이용한 발화검증 방법)

  • Kim, Chong-Hyon;Kwon, Suk-Bong;Kim, Hoi-Rin
    • Phonetics and Speech Sciences / v.1 no.1 / pp.55-62 / 2009
  • This paper proposes an improved method to verify keyword segments produced by a word spotting system. First, a baseline word spotting system is implemented. To improve the performance of the word spotting system, we use a two-pass structure consisting of a word spotting system and an utterance verification system. When the basic likelihood ratio test (LRT) based utterance verification system is used to verify the keywords, certain problems lead to performance degradation. We therefore propose a method that uses phone-level log-likelihood ratio (PLLR) patterns in computing confidence measures for each keyword. The proposed method generates weights according to the PLLR patterns and assigns a different weight to each phone in the process of generating confidence measures for the keywords. The proposed method is shown to be more appropriate for word spotting systems, and it improves the final word spotting accuracy.

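The per-phone weighting idea can be sketched as: compute a log-likelihood ratio for each phone of the keyword, derive a weight from the pattern of those PLLRs, and combine them into a keyword confidence. The specific weighting below (a softmax over negative PLLRs, so that weak phones count more) is an assumption for illustration, not the paper's exact formula.

```python
import numpy as np

def keyword_confidence(pllr, temperature=1.0):
    """pllr: per-phone log-likelihood ratios for one keyword hypothesis.
    Weights phones by how suspicious they look (low PLLR -> high weight),
    then returns the weighted average PLLR as the keyword confidence."""
    pllr = np.asarray(pllr, dtype=float)
    weights = np.exp(-pllr / temperature)
    weights /= weights.sum()
    return float(np.dot(weights, pllr))

# One weak phone drags the confidence down more than a plain mean would.
print(round(keyword_confidence([1.2, 0.9, -2.5, 1.1]), 3))
print(round(float(np.mean([1.2, 0.9, -2.5, 1.1])), 3))
```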

Utterance Verification Using Search Confusion Rate and Its N-Best Approach

  • Kim, Kyu-Hong;Kim, Hoi-Rin;Hahn, Min-Soo
    • ETRI Journal / v.27 no.4 / pp.461-464 / 2005
  • Recently, a variety of confidence measures for utterance verification have been studied to improve speech recognition performance by rejecting out-of-vocabulary inputs. Most conventional confidence measures for utterance verification are based primarily on hypothesis testing or an approximated posterior probability, and their performance depends on the robustness of an alternative hypothesis or the prior probability. We introduce a novel confidence measure called the search confusion rate (SCR), which requires neither an alternative hypothesis nor an approximation of the posterior probability. Our confusion-based approach shows better performance on speech corrupted by additive noise as well as on clean speech.

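The abstract does not spell out how the search confusion rate is computed. Purely as an assumption, the N-best variant can be pictured as measuring how often competing hypotheses score close to the best one; the sketch below is that generic idea, not the ETRI paper's exact SCR definition.

```python
def nbest_confusion_rate(nbest_scores, margin=5.0):
    """nbest_scores: decoder scores of the N-best hypotheses, best first.
    Returns the fraction of competitors scoring within `margin` of the best,
    used here as a confusion-style confidence (an illustrative stand-in)."""
    best, rest = nbest_scores[0], nbest_scores[1:]
    if not rest:
        return 0.0
    confused = sum(1 for s in rest if best - s <= margin)
    return confused / len(rest)

# High confusion suggests an unreliable, possibly out-of-vocabulary, result.
print(nbest_confusion_rate([-120.0, -121.5, -123.0, -140.0], margin=5.0))
```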

An Adaptive Utterance Verification Framework Using Minimum Verification Error Training

  • Shin, Sung-Hwan;Jung, Ho-Young;Juang, Biing-Hwang
    • ETRI Journal / v.33 no.3 / pp.423-433 / 2011
  • This paper introduces an adaptive and integrated utterance verification (UV) framework using minimum verification error (MVE) training as a new set of solutions suitable for real applications. UV is traditionally considered an add-on procedure to automatic speech recognition (ASR) and is thus treated separately from the design of the ASR system model. This traditional two-stage approach often fails to cope with a wide range of variations, such as a new speaker or a new environment that does not match the original speaker population or acoustic environment on which the ASR system was trained. In this paper, we propose an integrated solution to enhance the overall UV system performance in such real applications. The integration is accomplished by adapting and merging the target model for UV with the acoustic model for ASR based on the common MVE principle at each iteration of the recognition stage. The proposed iterative procedure for UV model adaptation also involves revision of the data segmentation and the decoded hypotheses. Under this new framework, remarkable enhancements in both recognition and verification performance are obtained.
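
MVE training, like the broader minimum classification error family, is usually formulated as minimizing a smoothed (sigmoid) count of verification errors. The sketch below shows one such loss over target-model and anti-model scores; the exact misverification measure and update rule used in the paper are not reproduced here, so treat the form and constants as assumptions.

```python
import numpy as np

def mve_loss(target_scores, anti_scores, labels, alpha=1.0):
    """Smoothed verification-error count (illustrative form).
    labels: +1 for utterances that should be accepted, -1 for impostor/OOV input.
    d > 0 means the verifier leans toward the wrong decision; the sigmoid
    turns that into a differentiable 0/1-like error."""
    target_scores = np.asarray(target_scores, dtype=float)
    anti_scores = np.asarray(anti_scores, dtype=float)
    labels = np.asarray(labels, dtype=float)
    d = -labels * (target_scores - anti_scores)   # misverification measure
    return float(np.mean(1.0 / (1.0 + np.exp(-alpha * d))))

# Well-separated scores give a loss near 0; overlapping scores approach 0.5.
print(round(mve_loss([5.0, -4.0], [-5.0, 4.0], [+1, -1]), 3))
print(round(mve_loss([0.1, -0.1], [-0.1, 0.1], [+1, -1]), 3))
```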

A Study on Out-of-Vocabulary Rejection Algorithms using Variable Confidence Thresholds (가변 신뢰도 문턱치를 사용한 미등록어 거절 알고리즘에 대한 연구)

  • Bhang, Ki-Duck;Kang, Chul-Ho
    • Journal of Korea Multimedia Society / v.11 no.11 / pp.1471-1479 / 2008
  • In this paper, we propose a technique to improve out-of-vocabulary (OOV) rejection algorithms in the variable-vocabulary recognition systems that are widely used in automatic speech recognition (ASR). Rejection systems can be classified into two categories by implementation method: the keyword spotting method and the utterance verification method. The utterance verification method uses the likelihood ratio of each phoneme's Viterbi score relative to an anti-phoneme score to decide whether an input is OOV. In this paper, we add a speaker verification system before utterance verification and calculate a speaker verification probability. The obtained speaker verification probability is then used to determine the proposed variable confidence threshold. Using the proposed method, we achieve significant performance improvements: CA (correct acceptance of keywords) of 94.23% and CR (correct rejection of out-of-vocabulary words) of 95.11% in an office environment, and CA of 91.14% and CR of 92.74% in a noisy environment.

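The variable threshold above can be pictured as making the utterance-verification threshold a function of the speaker verification probability: the more trusted the speaker, the more lenient the OOV-rejection threshold. The linear mapping and all constants below are illustrative assumptions, not the paper's formula.

```python
def variable_threshold(speaker_prob, base=0.0, span=1.0):
    """Map a speaker-verification probability in [0, 1] to a confidence
    threshold: trusted speakers get a lower, more lenient threshold."""
    return base + span * (1.0 - speaker_prob)

def accept_keyword(phoneme_llrs, speaker_prob):
    """Average phone-level likelihood ratios (vs. anti-phoneme models)
    and compare against the speaker-adapted threshold (illustrative)."""
    confidence = sum(phoneme_llrs) / len(phoneme_llrs)
    return confidence >= variable_threshold(speaker_prob)

# Same acoustic evidence, different speaker trust, different decision.
print(accept_keyword([0.6, 0.4, 0.5], speaker_prob=0.9))   # lenient threshold
print(accept_keyword([0.6, 0.4, 0.5], speaker_prob=0.2))   # strict threshold
```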

Confidence Measure for Utterance Verification in Noisy Environments (잡음 환경에서의 인식 거부 성능 향상을 위한 신뢰 척도)

  • Park, Jeong-Sik;Oh, Yung-Hwan
    • Proceedings of the KSPS conference / 2006.11a / pp.3-6 / 2006
  • This paper proposes a confidence measure for utterance verification in noisy environments. Most conventional approaches estimate a proper threshold for the confidence measure and apply that value to utterance rejection during recognition. Consequently, their performance may degrade on noisy speech, since the optimal threshold can change in noisy environments. This paper presents a more robust confidence measure based on a multi-pass confidence measure. Experimental results on isolated word recognition demonstrate that the proposed method outperforms conventional approaches as an utterance verifier.

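The abstract gives no detail on how the multi-pass scores are combined, so the sketch below only illustrates the general idea of pooling per-pass confidence scores into one measure; the weighted average is an assumption, not the paper's rule.

```python
def multipass_confidence(pass_scores, weights=None):
    """Combine confidence scores from several recognition passes into one
    measure via a weighted average (a generic illustration only)."""
    if weights is None:
        weights = [1.0 / len(pass_scores)] * len(pass_scores)
    return sum(w * s for w, s in zip(weights, pass_scores))

# Toy example: two decoding passes, the second weighted more heavily.
print(round(multipass_confidence([0.62, 0.48], weights=[0.4, 0.6]), 3))
```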

Deep neural networks for speaker verification with short speech utterances (짧은 음성을 대상으로 하는 화자 확인을 위한 심층 신경망)

  • Yang, IL-Ho;Heo, Hee-Soo;Yoon, Sung-Hyun;Yu, Ha-Jin
    • The Journal of the Acoustical Society of Korea / v.35 no.6 / pp.501-509 / 2016
  • We propose a method to improve the robustness of speaker verification on short test utterances. The accuracy of state-of-the-art i-vector/probabilistic linear discriminant analysis systems can degrade when test utterance durations are short. The proposed method compensates for utterance variations of short test feature vectors using deep neural networks. We design three different types of DNN (deep neural network) structures, which are trained with different target output vectors. Each DNN is trained to minimize the discrepancy between the feed-forwarded output of a given short utterance feature and its original long utterance feature. We use the short 2-10 s condition of the NIST (National Institute of Standards and Technology, U.S.) 2008 SRE (Speaker Recognition Evaluation) corpus to evaluate the method. The experimental results show that the proposed method reduces the minimum detection cost relative to the baseline system.
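
The training objective above, minimizing the discrepancy between the network's output for a short-utterance feature and the matching long-utterance feature, is essentially regression with an MSE-style loss. Below is a minimal NumPy sketch of one such network (a single hidden layer trained by plain gradient descent on synthetic pairs); the dimensions, architecture, and hyperparameters are illustrative and do not reproduce the paper's three DNN structures.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, hidden, n = 20, 64, 256              # feature dim, hidden units, training pairs

# Paired features: x = short-utterance vector, y = matching long-utterance vector.
x = rng.normal(size=(n, dim))
y = x + 0.1 * rng.normal(size=(n, dim))   # toy stand-in for real feature pairs

W1 = rng.normal(scale=0.1, size=(dim, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(scale=0.1, size=(hidden, dim)); b2 = np.zeros(dim)
lr = 0.1

for step in range(300):
    h = np.maximum(0.0, x @ W1 + b1)      # ReLU hidden layer
    out = h @ W2 + b2                     # predicted long-utterance feature
    err = out - y
    loss = float(np.mean(np.sum(err ** 2, axis=1)))   # squared error per pair
    # Backpropagation of the MSE-style loss.
    g_out = 2.0 * err / n
    gW2 = h.T @ g_out; gb2 = g_out.sum(axis=0)
    g_h = (g_out @ W2.T) * (h > 0)
    gW1 = x.T @ g_h; gb1 = g_h.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print(round(loss, 4))   # the discrepancy shrinks as the mapping is learned
```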

Approximated Posterior Probability for Scoring Speech Recognition Confidence

  • Kim Kyuhong;Kim Hoirin
    • MALSORI / no.52 / pp.101-110 / 2004
  • This paper proposes a new confidence measure for utterance verification based on posterior probability approximation. The proposed method approximates probabilistic likelihoods by using Viterbi search characteristics and a clustered phoneme confusion matrix. Our measure consists of a weighted linear combination of acoustic and phonetic confidence scores. The proposed algorithm shows better performance than conventional confidence measures, even with reduced computational complexity.

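The final score described above is a weighted linear combination of an acoustic and a phonetic confidence, with the phonetic part read off a clustered phoneme confusion matrix. The sketch below shows only that combination step; the weights, the toy confusion matrix, and the averaging over phones are illustrative assumptions.

```python
import numpy as np

def combined_confidence(acoustic_score, decoded_phones, intended_phones,
                        confusion, w=0.7):
    """Weighted linear combination of acoustic and phonetic confidence scores.
    confusion[a][b]: probability that phone cluster `a` is recognized as `b`
    (toy values). The phonetic score averages these over the phone sequence."""
    phonetic = float(np.mean([confusion[i][d]
                              for i, d in zip(intended_phones, decoded_phones)]))
    return w * acoustic_score + (1.0 - w) * phonetic

# Toy 3-cluster confusion matrix and a short phone sequence.
confusion = np.array([[0.8, 0.1, 0.1],
                      [0.2, 0.7, 0.1],
                      [0.1, 0.2, 0.7]])
print(round(combined_confidence(0.6, [0, 1, 1], [0, 1, 2], confusion), 3))
```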