• Title/Summary/Keyword: Vocabulary recognition

Search Results: 221

Performance Evaluation of HM-Net Speech Recognition System using Korea Large Vocabulary Speech DB (한국어 대어휘 음성DB를 이용한 HM-Net 음성인식 시스템의 성능평가)

  • 오세진;김광동;노덕규;송민규;김범국;황철준;정현열
    • Proceedings of the IEEK Conference / 2003.07e / pp.2443-2446 / 2003
  • In this paper, we evaluated the performance of an HM-Net (Hidden Markov Network) speech recognition system using the large-vocabulary speech DB provided by ETRI (Electronics and Telecommunications Research Institute). For acoustic modeling we adopted HM-Net, a refinement of the HMM (Hidden Markov Model), the statistical modeling method widely used in speech recognition. An HM-Net is generated by the PDT-SSS algorithm through state splitting in both the contextual and temporal directions: contextual state splitting employs a phonetic decision tree to effectively represent context information that does not appear in the training speech data, while temporal state splitting effectively represents the duration information of each phoneme in the training data. Through this state splitting, parameters are shared and an optimal model network is constructed. Building acoustic models from the large-vocabulary speech data and running recognition experiments, we obtained average recognition rates of 97.5% on 100 words and 96.7% on 60 sentences from 100 speakers.


A Computational Model of Language Learning Driven by Training Inputs

  • Lee, Eun-Seok;Lee, Ji-Hoon;Zhang, Byoung-Tak
    • Proceedings of the Korean Society for Cognitive Science Conference / 2010.05a / pp.60-65 / 2010
  • Language learning involves the linguistic environments around the learner, so the variation in the training input to which the learner is exposed has been linked to language learning outcomes. We explore how linguistic experiences can cause differences in learning linguistic structural features, investigated with a probabilistic graphical model. We gradually increase the amount of training input, composed of natural linguistic data from animation videos for children, from holistic (one-word expressions) to compositional (two- to six-word ones). The recognition and generation of sentences are a "probabilistic" constraint-satisfaction process based on massively parallel DNA chemistry. Random sentence-generation tasks succeed when networks begin with limited sentence lengths and vocabulary sizes and gradually expand to larger ones, like children's cognitive development in learning. This model supports the suggestion that variation in early linguistic environments, with developmental steps, may be useful for facilitating language acquisition.


A Study on the Recognition-Rate Improvement by the Keyword Spotting System using CM Algorithm (CM 알고리즘을 이용한 핵심어 검출 시스템의 인식률 향상에 관한 연구)

  • Won Jong-Moon;Lee Jung-Suk;Kim Soon-Hyob
    • Proceedings of the Acoustical Society of Korea Conference / autumn / pp.81-84 / 2001
  • This paper studies how to control out-of-vocabulary (OOV) rejection to improve the recognition rate of a medium-vocabulary keyword spotting system. This is a verification step that confirms the results produced by the keyword spotter; since the verification system requires a verification function for every phoneme, anti-phoneme models are used. The role of verification is to decide whether a recognized word is a registered (in-vocabulary) or unregistered (OOV) word. Because the word recognizer performs a Viterbi search, it recognizes in word units, but each recognized word is internally recognized in phoneme units. Therefore, anti-phoneme models with minimum verification error are used: each recognized phoneme unit is compared with its anti-phoneme model, and a confidence measure is obtained statistically. To convert these phoneme-level confidences into a word-level confidence, the phoneme-level values are averaged. In this way, the discrimination between registered and unregistered words is increased and improved recognition performance is obtained.

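The phoneme-to-word confidence conversion described above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the log-likelihood-ratio form of the per-phoneme score against an anti-phoneme model, the function names, and the thresholding are assumptions:

```python
def phoneme_confidence(ll_phone: float, ll_anti: float) -> float:
    """Per-phoneme confidence as the log-likelihood ratio of the phoneme
    model score against its anti-phoneme model score."""
    return ll_phone - ll_anti

def word_confidence(phone_scores: list[float]) -> float:
    """Word-level confidence as the average of the phoneme-level scores,
    as in the averaging step the abstract describes."""
    return sum(phone_scores) / len(phone_scores)

def accept_keyword(phone_scores: list[float], threshold: float) -> bool:
    """Accept as an in-vocabulary keyword if the averaged confidence clears
    the rejection threshold; otherwise reject as OOV."""
    return word_confidence(phone_scores) >= threshold
```

The averaging makes the score length-independent, so one threshold can serve words of different phoneme counts.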

Morphological analysis of spoken Korean using Viterbi search (Viterbi 검색 기법을 이용한 한국어 음성 언어의 형태소 분석)

  • 김병창
    • Proceedings of the Acoustical Society of Korea Conference / 1995.06a / pp.200-203 / 1995
  • This paper proposes a spoken-Korean processing model that is extensible to a large-vocabulary continuous spoken-Korean system. Integrating phoneme-level speech recognition with natural language processing can support sophisticated phonological/morphological analysis. The model consists of a diphone speech recognizer, a Viterbi dictionary searcher, and a morpheme connectivity information checker. Two-level hierarchical TDNNs recognize newly defined Korean diphones. The diphone sequences are segmented and converted to the most probable morpheme sequences by the Viterbi dictionary searcher. The morpheme sequences are then examined by the morpheme connectivity information checker, and the correct morpheme sequence with the greatest probability is selected. Experiments show that morphological analysis of spoken Korean can be achieved for 328 Eojeols with an 80.6% success rate.

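The Viterbi dictionary search step — segmenting a recognized phone sequence into the most probable morpheme sequence — can be sketched as a dynamic program over a pronunciation lexicon. This is a simplified illustration (exact phone-string matching, one log-probability per lexicon entry, no connectivity check), not the paper's system:

```python
import math

def viterbi_morpheme_search(phones, lexicon):
    """Segment a phone sequence into the most probable morpheme sequence.
    phones: list of phone strings from the recognizer.
    lexicon: dict mapping a concatenated phone string -> (morpheme, log_prob).
    Returns the best-scoring full segmentation, or [] if none exists."""
    n = len(phones)
    # best[i] = (best log-prob of segmenting phones[:i], morpheme sequence)
    best = [(-math.inf, [])] * (n + 1)
    best[0] = (0.0, [])
    for i in range(n):
        score_i, seq_i = best[i]
        if score_i == -math.inf:
            continue  # phones[:i] has no valid segmentation
        for j in range(i + 1, n + 1):
            chunk = "".join(phones[i:j])
            if chunk in lexicon:
                morpheme, lp = lexicon[chunk]
                if score_i + lp > best[j][0]:
                    best[j] = (score_i + lp, seq_i + [morpheme])
    return best[n][1]
```

In the paper's model, the surviving sequences would additionally be filtered by the morpheme connectivity information checker.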

Pronunciation Lexicon Optimization with Applying Variant Selection Criteria (발음 변이의 발음사전 포함 결정 조건을 통한 발음사전 최적화)

  • Jeon, Je-Hun;Chung, Min-Hwa
    • Proceedings of the KSPS conference / 2006.11a / pp.24-27 / 2006
  • This paper describes how a domain-dependent pronunciation lexicon is generated and optimized for Korean large vocabulary continuous speech recognition (LVCSR). At the lexicon level, pronunciation variations are usually modeled by adding pronunciation variants to the lexicon. We propose criteria for selecting appropriate pronunciation variants for the lexicon: (i) likelihood and (ii) frequency factors. Our experiment is conducted in three steps. First, variants are generated with knowledge-based rules. Second, we generate a domain-dependent lexicon that includes various numbers of pronunciation variants based on the proposed criteria. Finally, the WERs and RTFs are examined with each lexicon. In the experiment, a 0.72% WER reduction is obtained by introducing the variant pruning criteria. Furthermore, the RTF does not deteriorate even though the average number of variants is higher than in the compared lexica.

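The two selection criteria — likelihood and frequency — can be sketched as a simple pruning function. The threshold semantics and the fallback to the single most likely variant are assumptions for illustration, not the paper's exact criteria:

```python
def select_variants(variants, min_freq, min_rel_likelihood):
    """Prune a word's pronunciation variants with two criteria:
    (i) likelihood relative to the word's best-scoring variant, and
    (ii) frequency of the variant in the training data.
    variants: list of (pronunciation, log_likelihood, frequency)."""
    best_ll = max(ll for _, ll, _ in variants)
    kept = [p for p, ll, f in variants
            if f >= min_freq and (ll - best_ll) >= min_rel_likelihood]
    # Always keep at least the most likely variant so the word stays decodable.
    return kept or [max(variants, key=lambda v: v[1])[0]]
```

Pruning this way keeps lexicon size (and hence decoding cost) in check while retaining the variants the data actually supports.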

Performance Evaluation of Large Vocabulary Continuous Speech Recognition System (대어휘 연속음성 인식 시스템의 성능평가)

  • Kim Joo-Gon;Chung Hyun-Yeol
    • Proceedings of the Acoustical Society of Korea Conference / spring / pp.99-102 / 2002
  • In this paper, we introduce a multi-pass search method to improve the performance of a Korean large-vocabulary continuous speech recognition system and verify its effectiveness. For the continuous speech recognition experiments, we compare HTK, which is widely used for research, with a recognition system using the multi-pass search method. The language model used in the large-vocabulary continuous speech recognition system is a word N-gram language model in the standard ARPA format: the first pass uses a 2-gram language model and the second pass uses a backward 3-gram language model. After configuring the multi-pass search method for Korean continuous speech recognition, we carried out recognition experiments on various Korean speech databases. In experiments on a noisy stock-trading continuous speech database collected over the telephone network, HTK achieved 59.50% recognition performance while the multi-pass system achieved 73.31%, an improvement of about 13% over the HTK continuous speech recognition rate.

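The multi-pass idea — a cheap first-pass language model produces hypotheses, a stronger model re-ranks them — can be sketched as N-best rescoring. The interface (a `trigram_lp` function scoring a whole hypothesis, a single LM weight) is an assumption; the paper's second pass additionally runs its 3-gram backward over the first-pass lattice:

```python
def rescore_nbest(nbest, trigram_lp, lm_weight=1.0):
    """Second-pass rescoring: re-rank first-pass (e.g. bigram) N-best
    hypotheses with a stronger (e.g. trigram) language model.
    nbest: list of (word_list, acoustic_log_prob).
    trigram_lp: function mapping a word list to its LM log-probability."""
    def total_score(hyp):
        words, acoustic_lp = hyp
        return acoustic_lp + lm_weight * trigram_lp(words)
    return max(nbest, key=total_score)[0]
```

The split lets the expensive model touch only a short candidate list instead of the full search space, which is where the speed/accuracy trade-off of multi-pass decoding comes from.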

Efficient Language Model based on VCCV unit for Sentence Speech Recognition (문장음성인식을 위한 VCCV 기반의 효율적인 언어모델)

  • Park, Seon-Hui;No, Yong-Wan;Hong, Gwang-Seok
    • Proceedings of the KIEE Conference / 2003.11c / pp.836-839 / 2003
  • In this paper, we implement a bigram language model and evaluate which smoothing technique yields low perplexity for each unit. Words, morphemes, and clauses are widely used as the processing units of a language model. We propose VCCV units, which have a smaller vocabulary than morpheme and clause units, and compare the VCCV units with clause and morpheme units using perplexity. The most common metric for evaluating a language model is the probability the model assigns to test data, and the measure of perplexity derived from it. Smoothing is used to estimate probabilities when there are insufficient data to estimate them accurately. We constructed N-grams of VCCV units with low perplexity and tested the language model using Katz, Witten-Bell, absolute, and modified Kneser-Ney smoothing. The experimental results show that modified Kneser-Ney smoothing is the appropriate smoothing technique for VCCV units.

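Bigram perplexity itself can be illustrated compactly. For simplicity this sketch uses add-one smoothing rather than the Katz / Witten-Bell / absolute / modified Kneser-Ney variants the paper compares:

```python
import math
from collections import Counter

def bigram_perplexity(train, test):
    """Perplexity of an add-one-smoothed bigram model on a test token list.
    (A toy stand-in for the smoothing methods compared in the paper.)"""
    unigrams = Counter(train)
    bigrams = Counter(zip(train, train[1:]))
    vocab = len(set(train))
    log_prob = 0.0
    for w1, w2 in zip(test, test[1:]):
        # Add-one smoothing: every bigram gets at least count 1.
        p = (bigrams[(w1, w2)] + 1) / (unigrams[w1] + vocab)
        log_prob += math.log(p)
    n = len(test) - 1  # number of bigrams scored
    return math.exp(-log_prob / n)
```

A smaller unit inventory (as with VCCV units) shrinks the bigram table, which is exactly why the unit choice interacts with how much smoothing matters.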

Fast Speaker Adaptation and Environment Compensation Based on Eigenspace-based MLLR (Eigenspace-based MLLR에 기반한 고속 화자적응 및 환경보상)

  • Song Hwa-Jeon;Kim Hyung-Soon
    • MALSORI / no.58 / pp.35-44 / 2006
  • Maximum likelihood linear regression (MLLR) adaptation suffers severe performance degradation with a very small amount of adaptation data. Eigenspace-based MLLR, an alternative to MLLR for fast speaker adaptation, has the weak point that it cannot deal with the mismatch between training and testing environments. In this paper, we propose simultaneous fast speaker and environment adaptation based on eigenspace-based MLLR. We also extend the sub-stream-based eigenspace-based MLLR to generalize eigenspace-based MLLR with bias compensation. A vocabulary-independent word recognition experiment shows that the proposed algorithm is superior to eigenspace-based MLLR regardless of the amount of adaptation data in diverse noisy environments. In particular, the proposed sub-stream eigenspace-based MLLR with bias compensation yields a 67% relative improvement with 10 adaptation words in a 10 dB SNR environment, compared with conventional eigenspace-based MLLR.

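The core constraint of eigenspace-based MLLR — the speaker transform is restricted to a weighted combination of precomputed basis ("eigen") transforms, so only a handful of weights must be estimated from tiny adaptation data — can be sketched as follows. Matrices are plain nested lists, and the maximum-likelihood estimation of the weights themselves is omitted:

```python
def combine_eigenvoices(eigen_transforms, weights):
    """Build an adapted MLLR transform W as a weighted sum of basis
    transforms: W = sum_k weights[k] * eigen_transforms[k].
    eigen_transforms: list of equally sized matrices (nested lists)."""
    rows, cols = len(eigen_transforms[0]), len(eigen_transforms[0][0])
    w = [[0.0] * cols for _ in range(rows)]
    for basis, wk in zip(eigen_transforms, weights):
        for i in range(rows):
            for j in range(cols):
                w[i][j] += wk * basis[i][j]
    return w
```

With K basis transforms, adaptation estimates only K weights instead of a full matrix, which is what makes the method robust to very small adaptation sets; the paper's bias-compensation extension additionally absorbs the environment mismatch.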

Integrated Char-Word Embedding on Chinese NER using Transformer (트랜스포머를 이용한 중국어 NER 관련 문자와 단어 통합 임배딩)

  • Jin, ChunGuang;Joe, Inwhee
    • Proceedings of the Korea Information Processing Society Conference / 2021.05a / pp.415-417 / 2021
  • Since words in Chinese sentences are written without delimiters and the vocabulary is huge, Chinese NER (Named Entity Recognition) is usually based on character representations. In recent years, much Chinese NER research has reconsidered how to integrate word information into the model. However, traditional sequence models have complex structures and slow inference, and need additional dictionary information, which is difficult to deploy in industry. The approach in this paper is competitive with the state of the art and parallelizable: it integrates char-word embeddings so that the model learns word information. The proposed model is easy to implement, outperforms traditional models in speed and efficiency, and improves the F1-score on two datasets.
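One simple way to integrate character and word information, in the spirit of the abstract, is to concatenate each character vector with the vector of the word containing it before feeding the sequence to the Transformer. The paper's exact fusion scheme is not given here, so this is only an illustrative sketch with plain lists standing in for embedding tensors:

```python
def integrate_char_word(char_vecs, word_vec):
    """Concatenate each character embedding with its containing word's
    embedding, producing one integrated vector per character position.
    char_vecs: list of character vectors; word_vec: the word's vector."""
    return [cv + word_vec for cv in char_vecs]
```

Because the fusion happens per position, the resulting sequence keeps the character-level granularity NER tagging needs while still exposing word information, and it stays fully parallelizable across positions.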

An Attempt to Measure the Familiarity of Specialized Japanese in the Nursing Care Field

  • Haihong Huang;Hiroyuki Muto;Toshiyuki Kanamaru
    • Asia Pacific Journal of Corpus Research / v.4 no.2 / pp.57-74 / 2023
  • Having a firm grasp of technical terms is essential for learners of Japanese for Specific Purposes (JSP). This research aims to analyze Japanese nursing care vocabulary based on objective corpus-based frequency and subjectively rated word familiarity. For this purpose, we constructed a text corpus centered on the National Examination for Certified Care Workers to extract nursing care keywords. The Log-Likelihood Ratio (LLR) was used as the statistical criterion for keyword identification, giving a list of 300 keywords as target words for a further word recognition survey. The survey involved 115 participants, of whom 51 were certified care workers (CW group) and 64 were individuals from the general public (GP group). These participants rated the familiarity of the target keywords through crowdsourcing. Given the limited sample size, Bayesian linear mixed models were utilized to estimate word familiarity ratings. Our study conducted a comparative analysis of word familiarity between the CW group and the GP group, revealing key terms that are crucial for professionals but potentially unfamiliar to the general public. By focusing on these terms, instructors can bridge the knowledge gap more efficiently.
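The LLR keyness statistic used for keyword identification above is the standard Dunning G² computed from a 2×2 contingency of a word's frequencies and the two corpus sizes; the variable names here are ours:

```python
import math

def log_likelihood_ratio(a, b, c, d):
    """Dunning's log-likelihood ratio (G2) keyness statistic for a word
    occurring a times in the target corpus (total size c) and b times in
    the reference corpus (total size d). Higher G2 marks a stronger
    keyword candidate for the target corpus."""
    e1 = c * (a + b) / (c + d)  # expected target-corpus count
    e2 = d * (a + b) / (c + d)  # expected reference-corpus count
    g2 = 0.0
    if a:
        g2 += a * math.log(a / e1)
    if b:
        g2 += b * math.log(b / e2)
    return 2.0 * g2
```

A word occurring at the same relative rate in both corpora scores 0, while over-representation in the target corpus drives the score up, which is why ranking by G² surfaces domain keywords.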