• Title/Summary/Keyword: In Word Probability


Efficient Language Model based on VCCV unit for Sentence Speech Recognition (문장음성인식을 위한 VCCV 기반의 효율적인 언어모델)

  • Park, Seon-Hui;No, Yong-Wan;Hong, Gwang-Seok
    • Proceedings of the KIEE Conference / 2003.11c / pp.836-839 / 2003
  • In this paper, we implement a bigram language model and evaluate which smoothing technique is appropriate for a unit with low perplexity. Word, morpheme, and clause units are widely used as the processing unit of a language model. We propose VCCV units, which have a smaller vocabulary than morpheme and clause units, and compare them with the clause and morpheme units in terms of perplexity. The most common metric for evaluating a language model is the probability the model assigns to test data, or its derivative measure, perplexity. Smoothing is used to estimate probabilities when there are insufficient data to estimate them accurately. We construct N-grams of VCCV units with low perplexity and test the language model with Katz, Witten-Bell, absolute, and modified Kneser-Ney smoothing. The experimental results show that modified Kneser-Ney smoothing is the most suitable technique for the VCCV units.

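The abstract above hinges on three ideas: a bigram model, a smoothing scheme, and perplexity as the evaluation metric. The following Python sketch illustrates those ideas only; it is not the paper's VCCV implementation, and the toy corpus, the absolute-discounting constant, and the helper names are assumptions.

```python
import math
from collections import Counter, defaultdict

def train_bigram(sentences, discount=0.75):
    """Absolute-discounted bigram model that backs off to an add-one unigram."""
    unigrams, bigrams = Counter(), Counter()
    followers = defaultdict(set)            # distinct continuations of each history
    for sent in sentences:
        tokens = ["<s>"] + sent + ["</s>"]
        unigrams.update(tokens)
        for h, w in zip(tokens, tokens[1:]):
            bigrams[(h, w)] += 1
            followers[h].add(w)
    vocab = set(unigrams) | {"<unk>"}
    total = sum(unigrams.values())

    def p_uni(w):                           # add-one unigram, so nothing gets zero mass
        w = w if w in vocab else "<unk>"
        return (unigrams[w] + 1) / (total + len(vocab))

    def p_bi(w, h):
        h = h if h in vocab else "<unk>"
        c_h = unigrams[h]
        if c_h == 0:
            return p_uni(w)
        backoff_mass = discount * len(followers[h]) / c_h
        return max(bigrams[(h, w)] - discount, 0) / c_h + backoff_mass * p_uni(w)

    return p_bi

def perplexity(p_bi, sentences):
    """exp of the average negative log-probability over all predicted tokens."""
    log_prob, n = 0.0, 0
    for sent in sentences:
        tokens = ["<s>"] + sent + ["</s>"]
        for h, w in zip(tokens, tokens[1:]):
            log_prob += math.log(p_bi(w, h))
            n += 1
    return math.exp(-log_prob / n)

train = [["the", "cat", "sat"], ["the", "dog", "sat"]]
held_out = [["the", "cat", "sat"]]
print(perplexity(train_bigram(train), held_out))
```

Swapping in a different smoothing scheme (Katz, Witten-Bell, modified Kneser-Ney) only changes how `p_bi` redistributes probability mass; the perplexity computation stays the same.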

A Study on Automatic Measurement of Pronunciation Accuracy of English Speech Produced by Korean Learners of English (한국인 영어 학습자의 발음 정확성 자동 측정방법에 대한 연구)

  • Yun, Weon-Hee;Chung, Hyun-Sung;Jang, Tae-Yeoub
    • Proceedings of the KSPS conference / 2005.11a / pp.17-20 / 2005
  • The purpose of this project is to develop a device that can automatically measure the pronunciation of English speech produced by Korean learners of English. Pronunciation proficiency will be measured largely in two areas: suprasegmental and segmental. In the suprasegmental area, intonation and word stress will be traced and compared with those of native speakers by statistical methods using tilt parameters. Durations of phones will also be examined to measure the naturalness of speakers' pronunciation; statistical duration modelling from a large speech database using CART will be considered for this purpose. For segmental measurement of pronunciation, the acoustic probability of a phone, a byproduct of forced alignment, will be the basis for scoring the pronunciation accuracy of that phone. The final score will be fed back to the learners to help them improve their pronunciation.

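As general background on the segmental part of the abstract above, the acoustic probability obtained from forced alignment is often turned into a Goodness-of-Pronunciation style score. The sketch below illustrates only that idea; the per-frame log-likelihoods and the normalization are assumptions, not the paper's actual scoring formula.

```python
def phone_score(target_frame_logliks, competitor_frame_logliks):
    """GOP-style score for one aligned phone segment:
    duration-normalized log-likelihood of the intended phone minus that of the
    best-scoring phone overall, so 0 means the intended phone wins and large
    negative values suggest a likely mispronunciation."""
    n = len(target_frame_logliks)
    target = sum(target_frame_logliks) / n
    best = max(sum(logliks) / n for logliks in competitor_frame_logliks)
    return target - best

# Hypothetical per-frame log-likelihoods for a 4-frame phone segment.
target = [-2.1, -1.9, -2.3, -2.0]
all_phones = [target, [-2.8, -2.5, -3.0, -2.6], [-4.0, -3.9, -4.2, -4.1]]
print(phone_score(target, all_phones))   # 0.0: the intended phone is the best match
```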

A Low-Power LSI Design of Japanese Word Recognition System

  • Yoshizawa, Shingo;Miyanaga, Yoshikazu;Wada, Naoya;Yoshida, Norinobu
    • Proceedings of the IEEK Conference / 2002.07a / pp.98-101 / 2002
  • This paper reports a parallel architecture for an HMM-based speech recognition system intended for a low-power LSI design. The proposed architecture calculates the output probability of a continuous HMM (CHMM) using concurrent and pipelined processing, which reduces memory accesses and gives high computing efficiency. The novel point is the efficient use of register arrays, which reduces memory accesses considerably compared with conventional methods. Using this technique, the implemented system achieves real-time response at a lower clock frequency in a middle-size vocabulary recognition task (100-1000 words).

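The core quantity in the abstract above is the CHMM output probability. As a plain software reference for that quantity (a diagonal-covariance Gaussian mixture evaluated in the log domain), here is a short sketch; the dimensions, weights, and values are assumptions, and none of the paper's register-array or pipelining optimizations are modeled.

```python
import numpy as np

def log_output_prob(x, weights, means, variances):
    """log b_j(x) for one CHMM state: a diagonal-covariance Gaussian mixture,
    combined with log-sum-exp for numerical stability."""
    x = np.asarray(x, dtype=float)
    d = x.shape[0]
    # Per-mixture log N(x; mean_m, diag(var_m))
    log_gauss = -0.5 * (d * np.log(2 * np.pi)
                        + np.sum(np.log(variances), axis=1)
                        + np.sum((x - means) ** 2 / variances, axis=1))
    log_terms = np.log(weights) + log_gauss
    m = np.max(log_terms)
    return m + np.log(np.sum(np.exp(log_terms - m)))

# Hypothetical 2-mixture state over 3-dimensional feature vectors.
weights = np.array([0.6, 0.4])
means = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
variances = np.array([[1.0, 1.0, 1.0], [0.5, 0.5, 0.5]])
print(log_output_prob([0.2, -0.1, 0.3], weights, means, variances))
```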

Block based Normalized Numeric Image Descriptor (블록기반 정규화 된 이미지 수 표현자)

  • Park, Yu-Yung;Cho, Sang-Bock;Lee, Jong-Hwa
    • Journal of the Institute of Electronics Engineers of Korea SP / v.49 no.2 / pp.61-68 / 2012
  • This paper describes a normalized numeric image descriptor used to assess the luminance and contrast of an image. The proposed descriptor weights each pixel value by the probability density function (PDF) of the image and is defined with a normalization so that it gives an objective representation. The descriptor can be used in an adaptive gamma process because it provides an objective basis for selecting the gamma value.
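The abstract above is brief, so the following sketch is only a loose illustration of a block-based, PDF-weighted, normalized luminance descriptor; the block size, the weighting, and the normalization are assumptions rather than the descriptor actually defined in the paper.

```python
import numpy as np

def block_descriptor(gray, block=8):
    """For each block, weight every pixel by the probability of its intensity
    (taken from the block's histogram, i.e. its empirical PDF) and normalize
    the sum into [0, 1]."""
    h, w = gray.shape
    scores = []
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            patch = gray[y:y + block, x:x + block]
            hist, _ = np.histogram(patch, bins=256, range=(0, 256))
            pdf = hist / patch.size
            weighted = np.sum(patch * pdf[patch])       # each pixel weighted by its PDF value
            scores.append(weighted / (255.0 * patch.size))
    return np.array(scores)

# Hypothetical 16x16 8-bit grayscale image -> one normalized number per 8x8 block.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)
print(block_descriptor(image))
```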

A Study on the Frequency Hopping Code Division Multiple Access System (주파수도약 부호분할다원접속 방식에 관한 연구)

  • 한경섭;한영열;심수보
    • The Journal of Korean Institute of Communications and Information Sciences / v.12 no.5 / pp.535-542 / 1987
  • In this paper, a modified version of the asynchronous repeat FH/MFSK transmission system is proposed. By transmitting fewer data bits than address bits, the system performance can be improved. We evaluate the word error probability of the modified system and compare it with the conventional system, and find that the modified system shows improved performance. In addition, the structure of the receiver is remarkably simplified compared with the conventional system.

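As general background for the word error evaluation mentioned above, and not the paper's derivation for the asynchronous repeat FH/MFSK system, a word made of L independently detected symbols, each wrong with probability P_s, is received in error with probability

```latex
P_w = 1 - (1 - P_s)^{L}
```

The paper's actual analysis additionally accounts for the address bits and the hit statistics of the hopping pattern, which this simple relation does not capture.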

The MeSH-Term Query Expansion Models using LDA Topic Models in Health Information Retrieval (MeSH 기반의 LDA 토픽 모델을 이용한 검색어 확장)

  • You, Sukjin
    • Journal of Korean Library and Information Science Society / v.52 no.1 / pp.79-108 / 2021
  • Information retrieval in the health field has several challenges. Health information terminology is difficult for consumers (laypeople) to understand, and formulating a query with professional terms is not easy for them because health-related terms are more familiar to health professionals. Automatically adding health terms related to a query would therefore help consumers find relevant information. The proposed query expansion (QE) models show how to expand a query using MeSH terms. The documents were represented by MeSH terms (i.e., Bag-of-MeSH) found in the full-text articles, and these MeSH terms were used to generate LDA (Latent Dirichlet Allocation) topic models. A query and the top k retrieved documents were used to find MeSH terms as topic words related to the query. LDA topic words were filtered by threshold values of topic probability (TP) and word probability (WP). Threshold values were effective in an LDA model with a specific number of topics for increasing IR performance in terms of infAP (inferred Average Precision) and infNDCG (inferred Normalized Discounted Cumulative Gain), which are common IR metrics for large data collections with incomplete judgments. The top k words were chosen by a word score based on TP * WP and the retrieved-document ranking in an LDA model with specific thresholds. The QE model with specific thresholds for TP and WP showed improved mean infAP and infNDCG scores compared with the baseline result.
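The filtering and scoring step described above (keep LDA topic words whose topic probability and word probability pass thresholds, then rank them by TP * WP) can be sketched without any IR framework. The toy distributions, thresholds, and function names below are assumptions, not the paper's tuned values.

```python
def expand_query(doc_topic_probs, topic_word_probs, tp_min=0.2, wp_min=0.01, top_k=5):
    """Pick expansion terms from LDA output.
    doc_topic_probs:  {topic_id: TP} for the query plus top retrieved documents.
    topic_word_probs: {topic_id: {term: WP}} from the trained LDA model.
    A term is kept only if its topic passes tp_min and the term passes wp_min;
    candidates are then ranked by the combined score TP * WP."""
    scores = {}
    for topic, tp in doc_topic_probs.items():
        if tp < tp_min:
            continue
        for term, wp in topic_word_probs[topic].items():
            if wp < wp_min:
                continue
            scores[term] = max(scores.get(term, 0.0), tp * wp)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Hypothetical MeSH-term distributions for three topics.
doc_topics = {0: 0.55, 1: 0.30, 2: 0.15}
topic_terms = {
    0: {"Diabetes Mellitus": 0.08, "Insulin": 0.05, "Diet": 0.004},
    1: {"Hypertension": 0.06, "Insulin": 0.02},
    2: {"Neoplasms": 0.09},
}
print(expand_query(doc_topics, topic_terms))
# ['Diabetes Mellitus', 'Insulin', 'Hypertension'] -- topic 2 and 'Diet' fall below the thresholds
```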

A Study on Keyword Spotting System Using Pseudo N-gram Language Model (의사 N-gram 언어모델을 이용한 핵심어 검출 시스템에 관한 연구)

  • 이여송;김주곤;정현열
    • The Journal of the Acoustical Society of Korea / v.23 no.3 / pp.242-247 / 2004
  • Conventional keyword spotting systems use a connected-word recognition network consisting of keyword models and filler models, and therefore cannot effectively construct language models of word appearance for detecting keywords in a large-vocabulary continuous speech recognition system with large text data. To solve this problem, this paper proposes a keyword spotting system that uses a pseudo N-gram language model for detecting keywords, and investigates its performance as the appearance frequencies of the keywords and filler models change. When the unigram probabilities of the keywords and filler models were set to 0.2 and 0.8, respectively, the experimental results showed that CA (Correct Acceptance of In-Vocabulary words) and CR (Correct Rejection of Out-Of-Vocabulary words) were 91.1% and 91.7%, which means the proposed system improves the average CA-CR performance by 14% over conventional methods in terms of ERR (Error Reduction Rate).
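To make the reported figures concrete, here is a small sketch of how the two metrics in the abstract above, CA (correct acceptance of in-vocabulary keywords) and CR (correct rejection of out-of-vocabulary speech), could be computed from labeled detection results; the data layout and the example outcomes are assumptions.

```python
def ca_cr(results):
    """results: list of (contains_keyword, was_accepted) pairs, one per test utterance.
    CA = accepted keyword utterances / keyword utterances,
    CR = rejected non-keyword utterances / non-keyword utterances."""
    kw = [accepted for is_kw, accepted in results if is_kw]
    oov = [accepted for is_kw, accepted in results if not is_kw]
    ca = sum(kw) / len(kw)
    cr = sum(not accepted for accepted in oov) / len(oov)
    return ca, cr

# Hypothetical detection outcomes.
outcomes = ([(True, True)] * 91 + [(True, False)] * 9
            + [(False, False)] * 92 + [(False, True)] * 8)
ca, cr = ca_cr(outcomes)
print(f"CA = {ca:.1%}, CR = {cr:.1%}")   # CA = 91.0%, CR = 92.0% for this made-up data
```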

HMM-based Speech Recognition using DMS Model and Fuzzy Concept (DMS 모델과 퍼지 개념을 이용한 HMM에 기초를 둔 음성 인식)

  • Ann, Tae-Ock
    • Journal of the Korea Academia-Industrial cooperation Society / v.9 no.4 / pp.964-969 / 2008
  • As a study of speaker-independent speech recognition, this paper proposes an HMM-based recognition method that uses a DMSVQ (Dynamic Multi-Section Vector Quantization) codebook built from a DMS (Dynamic Multi-Section) model together with a fuzzy concept. In the proposed method, the training data are divided into several dynamic sections, and multi-observation sequences are obtained for each section by assigning probabilities with a fuzzy rule according to the rank of distances from the DMSVQ codebook. An HMM is then generated from these multi-observation sequences, and at recognition time the word with the highest probability is selected as the recognized word. For comparison, experiments with various conventional recognition methods are carried out on the same data under an equivalent environment. The experimental results show that the proposed method is superior to the conventional recognition methods.
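The step in which observations receive probabilities by a fuzzy rule according to their distance rank from the codebook can be illustrated in isolation. The membership function below (an inverse-distance weighting over the k nearest codewords) is an assumption chosen for the sketch, not necessarily the fuzzy rule used in the paper.

```python
import numpy as np

def fuzzy_observations(frame, codebook, k=3):
    """Return the k nearest codeword indices with fuzzy weights for one feature frame.
    The weights are inverse-distance memberships that sum to 1, so each frame yields
    a weighted multi-observation instead of the single index of ordinary VQ."""
    dists = np.linalg.norm(codebook - frame, axis=1)
    nearest = np.argsort(dists)[:k]
    inv = 1.0 / (dists[nearest] + 1e-9)      # guard against division by zero on exact hits
    weights = inv / inv.sum()
    return list(zip(nearest.tolist(), weights.tolist()))

# Hypothetical 4-codeword codebook in a 2-dimensional feature space.
codebook = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
frame = np.array([0.2, 0.1])
print(fuzzy_observations(frame, codebook))
```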

HMM-based Speech Recognition using FSVQ and Fuzzy Concept (FSVQ와 퍼지 개념을 이용한 HMM에 기초를 둔 음성 인식)

  • 안태옥
    • Journal of the Institute of Electronics Engineers of Korea SP / v.40 no.6 / pp.90-97 / 2003
  • This paper proposes an HMM (Hidden Markov Model)-based speech recognition method using FSVQ (First Section Vector Quantization) and a fuzzy concept. We generate a codebook for the first section and then obtain multi-observation sequences from it, ordered by the largest probabilistic values assigned by a fuzzy rule. These first-section observation sequences are used to train the models, and at recognition time the word with the highest first-section probability is selected as the recognized word by the same concept. Train station names are used as the target recognition vocabulary, and LPC cepstrum coefficients are used as the feature parameters. In addition to the recognition experiments with the proposed method, other methods are tested under the same conditions and data. The experimental results show that the proposed HMM-based method using FSVQ and the fuzzy concept is superior to the others in recognition rate.
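Independently of the FSVQ details, the final decision rule in the abstract above (select the word whose model yields the highest probability) is just an argmax over per-word scores; the word list and log-likelihood values below are assumptions.

```python
def recognize(log_likelihoods):
    """Pick the word whose HMM assigns the observation sequence the highest score."""
    return max(log_likelihoods, key=log_likelihoods.get)

# Hypothetical log-likelihoods of one utterance under each train-station word model.
scores = {"Seoul": -412.7, "Busan": -398.2, "Daejeon": -405.9}
print(recognize(scores))   # Busan
```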

Automatic Classification of Documents Using Word Correlation (단어의 연관성을 이용한 문서의 자동분류)

  • Sin, Jin-Seop;Lee, Chang-Hun
    • The Transactions of the Korea Information Processing Society / v.6 no.9 / pp.2422-2430 / 1999
  • In this paper, we propose a new method for the automatic classification of web documents using the degree of correlation between words. First, we select keywords by term frequency and inverse document frequency (TF*IDF) and compute the degree of relevance between the keywords over the whole document collection, using a probabilistic model of the words closely connected with them, to create a profile that characterizes each class. This process is repeated until the relevance falls below a threshold value, producing several profiles that reflect the users' concerns. Finally, we classify each document with these profiles and compare the results with those of other automatic classification methods.

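As a loose illustration of the first two steps described above (selecting keywords by TF*IDF and then measuring how strongly other words are connected to them through co-occurrence), here is a small sketch; the toy corpus, the co-occurrence-based relevance measure, and the cut-offs are assumptions rather than the paper's exact probability model.

```python
import math
from collections import Counter

def tf_idf_keywords(docs, top_k=2):
    """Score every term by TF*IDF within its document and keep the top ones."""
    df = Counter(term for doc in docs for term in set(doc))
    n_docs = len(docs)
    scored = Counter()
    for doc in docs:
        tf = Counter(doc)
        for term, freq in tf.items():
            scored[term] = max(scored[term],
                               (freq / len(doc)) * math.log(n_docs / df[term]))
    return [term for term, _ in scored.most_common(top_k)]

def correlation(keyword, docs):
    """Fraction of keyword-containing documents in which each other word also appears,
    a simple stand-in for the degree of relevance between the keyword and that word."""
    with_kw = [doc for doc in docs if keyword in doc]
    counts = Counter(term for doc in with_kw for term in set(doc) if term != keyword)
    return {term: c / len(with_kw) for term, c in counts.items()}

# Hypothetical tokenized web documents.
docs = [
    ["soccer", "goal", "league", "match"],
    ["soccer", "match", "player", "goal"],
    ["stock", "market", "price", "index"],
]
for kw in tf_idf_keywords(docs):
    print(kw, correlation(kw, docs))
```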