• Title/Summary/Keyword: In Word Probability

Isolated Word Recognition Using Segment Probability Model (분할확률 모델을 이용한 한국어 고립단어 인식)

  • 김진영;성경모
    • Journal of the Korean Institute of Telematics and Electronics
    • /
    • v.25 no.12
    • /
    • pp.1541-1547
    • /
    • 1988
  • In this paper, a new model for isolated word recognition, called the segment probability model, is proposed. The model consists of two procedures: segmentation and the modelling of each segment. The spoken word is divided into arbitrary segments, and the observation probability within each segment is obtained using vector quantization. The proposed model is compared with the pattern matching method and the hidden Markov model in recognition experiments. The results show that the proposed model outperforms existing methods in terms of both recognition rate and computational cost.
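
The scoring scheme this abstract describes can be sketched roughly as below. This is a minimal illustration, not the authors' implementation: the function names, the fixed segment count K, and the nearest-centroid quantizer are assumptions made for the sketch.

```python
import numpy as np

def score_word(frames, codebook, segment_probs):
    """Score an utterance against one word model under a segment
    probability model: split the frame sequence into K equal segments,
    vector-quantize each frame, and sum the log probability of each
    codeword under its segment's distribution.

    frames:        (T, D) array of feature vectors
    codebook:      (C, D) array of VQ centroids
    segment_probs: (K, C) array, P(codeword | segment) for this word
    """
    K = segment_probs.shape[0]
    # Nearest-centroid vector quantization of every frame.
    dists = np.linalg.norm(frames[:, None, :] - codebook[None, :, :], axis=2)
    codes = dists.argmin(axis=1)                      # (T,)
    # Divide the utterance into K roughly equal segments.
    bounds = np.linspace(0, len(frames), K + 1).astype(int)
    logp = 0.0
    for k in range(K):
        seg_codes = codes[bounds[k]:bounds[k + 1]]
        logp += np.log(segment_probs[k, seg_codes] + 1e-12).sum()
    return logp

# Recognition picks the word whose model scores highest, e.g.:
# best = max(models, key=lambda w: score_word(frames, codebook, models[w]))
```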

A Study on Korean Spoken Language Understanding Model (한국어 구어 음성 언어 이해 모델에 관한 연구)

  • 노용완;홍광석
    • Proceedings of the IEEK Conference
    • /
    • 2003.07e
    • /
    • pp.2435-2438
    • /
    • 2003
  • In this paper, we propose a Korean speech understanding model that uses a dictionary and a thesaurus. The proposed model searches the dictionary for each word in the input text; if a word is not found, the model searches for its higher-level words in a higher-level word dictionary built from the thesaurus. The sentence understanding probability is compared with a threshold probability to obtain the speech understanding rate. We evaluated the sentence speech understanding system on a twenty-questions game. In the experiments, we obtained a sentence speech understanding accuracy of 79.8%, with the higher-level word probability set to 0.9 and the threshold probability set to 0.38.
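
As a rough illustration of the dictionary/thesaurus lookup and thresholding described above: the helper names and data structures here are hypothetical; only the 0.9 higher-level word probability and the 0.38 threshold come from the abstract.

```python
def sentence_understanding_prob(words, dictionary, hypernym_of,
                                hypernym_prob=0.9):
    """Toy version of the dictionary/thesaurus lookup: a word found in
    the dictionary contributes a factor of 1.0, a word found only via
    its thesaurus hypernym contributes hypernym_prob (0.9 in the
    paper), and an unknown word makes the sentence probability 0.
    """
    p = 1.0
    for w in words:
        if w in dictionary:
            continue                       # exact match: factor 1.0
        elif hypernym_of.get(w) in dictionary:
            p *= hypernym_prob             # matched via higher-level word
        else:
            return 0.0                     # unknown word: reject
    return p

def understood(words, dictionary, hypernym_of, threshold=0.38):
    # The paper compares the sentence probability with a threshold;
    # 0.38 gave 79.8% accuracy in their twenty-questions experiment.
    return sentence_understanding_prob(words, dictionary, hypernym_of) >= threshold
```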

A Study on Pseudo N-gram Language Models for Speech Recognition (음성인식을 위한 의사(疑似) N-gram 언어모델에 관한 연구)

  • 오세진;황철준;김범국;정호열;정현열
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.2 no.3
    • /
    • pp.16-23
    • /
    • 2001
  • In this paper, we propose pseudo n-gram language models for medium-vocabulary speech recognition, as opposed to large-vocabulary speech recognition based on statistical n-gram language models. The proposed method is very simple: it keeps the standard ARPA file structure but sets the word probabilities arbitrarily. First, the 1-grams set each word occurrence probability to 1 (log likelihood 0.0). Second, the 2-grams likewise set the probability to 1, but allow only connections between the word start symbol <s> and WORD, and between WORD and the word end symbol </s>. Finally, the 3-grams also set the probability to 1, allowing only the connection <s> WORD </s>. To verify the effectiveness of the proposed method, word recognition experiments were carried out. Preliminary (off-line) results show an average word accuracy of 97.7% for 452 words uttered by 3 male speakers. On-line word recognition results show an average word accuracy of 92.5% for 20 words uttered by 20 male speakers, on stock names from a 1,500-word vocabulary. These experiments verify the effectiveness of pseudo n-gram language models for speech recognition.
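
A pseudo ARPA file of the kind described, with every allowed event given probability 1 (log10 probability 0.0), might be generated as below. This is a sketch: the exact n-gram inventory is inferred from the <s> WORD </s> constraint in the abstract.

```python
def write_pseudo_arpa(words, path):
    """Write a pseudo trigram LM in standard ARPA format in which every
    allowed event has probability 1 (log10 probability 0.0), so the LM
    constrains word order only to <s> WORD </s> sequences.
    """
    with open(path, "w", encoding="utf-8") as f:
        f.write("\\data\\\n")
        f.write(f"ngram 1={len(words) + 2}\n")
        f.write(f"ngram 2={2 * len(words)}\n")
        f.write(f"ngram 3={len(words)}\n\n")

        f.write("\\1-grams:\n")
        f.write("0.0 <s> 0.0\n0.0 </s>\n")
        for w in words:
            f.write(f"0.0 {w} 0.0\n")        # P(w) = 1, backoff weight 1

        f.write("\n\\2-grams:\n")
        for w in words:
            f.write(f"0.0 <s> {w} 0.0\n")    # only <s> -> WORD allowed
            f.write(f"0.0 {w} </s>\n")       # only WORD -> </s> allowed

        f.write("\n\\3-grams:\n")
        for w in words:
            f.write(f"0.0 <s> {w} </s>\n")   # the full <s> WORD </s> path
        f.write("\n\\end\\\n")
```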

Speech Verification using Similar Word Information in Isolated Word Recognition (고립단어 인식에 유사단어 정보를 이용한 단어의 검증)

  • 백창흠;이기정;홍재근
    • Proceedings of the IEEK Conference
    • /
    • 1998.10a
    • /
    • pp.1255-1258
    • /
    • 1998
  • The hidden Markov model (HMM) is the most widely used method in speech recognition. In general, HMM parameters are trained to maximize the likelihood (ML) of the training data, which does not take discrimination against other words into account. To address this problem, this paper proposes a word verification method that re-recognizes the recognized word and its similar word using a discriminative function between the two words. The similar word is selected by calculating the probability of the other words under each HMM. A recognizer that discriminates between words is realized by weighting each state, with the weights calculated by a genetic algorithm.
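
A minimal sketch of how the similar word might be chosen, assuming each word HMM exposes an hmmlearn-style score() method returning a log-likelihood; the interface and names are assumptions, not the paper's code.

```python
def pick_recognized_and_similar(obs, hmms):
    """For each word HMM, compute the log-likelihood of the
    observation; the best-scoring word is the recognition hypothesis,
    and the best-scoring *other* word is taken as its 'similar word',
    which the verifier then re-scores with a discriminative
    (state-weighted) model.

    hmms: dict mapping word -> model with a .score(obs) log-likelihood.
    """
    scores = {w: m.score(obs) for w, m in hmms.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    recognized, similar = ranked[0], ranked[1]
    return recognized, similar
```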

An Effective Estimation method for Lexical Probabilities in Korean Lexical Disambiguation (한국어 어휘 중의성 해소에서 어휘 확률에 대한 효과적인 평가 방법)

  • Lee, Ha-Gyu
    • The Transactions of the Korea Information Processing Society
    • /
    • v.3 no.6
    • /
    • pp.1588-1597
    • /
    • 1996
  • This paper describes an estimation method for lexical probabilities in Korean lexical disambiguation. In the stochastic approach to lexical disambiguation, lexical and contextual probabilities are generally estimated from statistical data extracted from corpora. For Korean, it is desirable to estimate lexical probabilities in terms of word phrases, because sentences are spaced in units of word phrases. However, Korean word phrases are so multiform that lexical probabilities often cannot be estimated directly for them, even when fairly large corpora are used. To overcome this problem, this research defines a similarity between word phrases from the lexical analysis point of view and proposes an estimation method for Korean lexical probabilities based on it: when the lexical probability of a word phrase cannot be estimated directly, it is estimated indirectly through a similar word phrase. Experimental results show that the proposed approach is effective for Korean lexical disambiguation.
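
The back-off idea can be sketched as below. The similarity function is left abstract because the paper defines it from the lexical analysis point of view; all names here are hypothetical.

```python
def lexical_prob(phrase, counts, total, similar_phrase):
    """Estimate a lexical probability for a Korean word phrase: use the
    corpus relative frequency when the phrase was seen, and otherwise
    back off to a phrase judged similar from the lexical analysis
    point of view and reuse its relative frequency.

    counts:         dict mapping word phrase -> corpus count
    total:          total number of word phrases in the corpus
    similar_phrase: callable returning the most similar seen phrase,
                    or None if there is none
    """
    if counts.get(phrase, 0) > 0:
        return counts[phrase] / total          # direct estimate
    sim = similar_phrase(phrase)               # indirect estimate via
    if sim is not None and counts.get(sim, 0) > 0:
        return counts[sim] / total             # the similar phrase
    return 1.0 / total                         # floor for unseen phrases
```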

Feature Extraction of Web Document using Association Word Mining (연관 단어 마이닝을 사용한 웹문서의 특징 추출)

  • 고수정;최준혁;이정현
    • Journal of KIISE:Databases
    • /
    • v.30 no.4
    • /
    • pp.351-361
    • /
    • 2003
  • Previous studies that extract document features through word association suffer from the need to update profiles periodically, difficulties in handling noun phrases, and the cost of calculating probabilities for indices. We propose a more effective feature extraction method that uses association word mining. Using the Apriori algorithm, the method represents a document feature not as single words but as association-word vectors. The association words extracted by the Apriori algorithm depend on the confidence, the support, and the number of words they contain, and this paper proposes an effective way to determine these three parameters. Because feature extraction with association word mining uses no profile, it needs no profile updates, and it generates noun phrases automatically through the confidence and support of the Apriori algorithm, without calculating probabilities for indices. We apply the proposed method to document classification with a Naive Bayes classifier and compare it with the information gain and TF·IDF methods, as well as with document classification methods based on index association and on probability-model word association.
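
A toy Apriori pass of the kind the abstract relies on might look like the sketch below, under the assumption that each document has been reduced to a set of nouns; the thresholds and maximum association length are caller-supplied, as in the paper's parameter study.

```python
def apriori_assoc_words(docs, min_support, min_confidence, max_len=3):
    """Tiny Apriori pass over documents (each given as a set of nouns):
    grow word sets that meet min_support, then keep associations
    (S - {w}) -> w whose confidence support(S) / support(S - {w})
    clears min_confidence.
    """
    n = len(docs)
    support = lambda s: sum(s <= d for d in docs) / n
    items = {w for d in docs for w in d}
    frequent = [frozenset([w]) for w in items
                if support(frozenset([w])) >= min_support]
    result, size = [], 1
    while frequent and size < max_len:
        size += 1
        # Candidate generation: unions of frequent sets of the right size.
        candidates = {a | b for a in frequent for b in frequent
                      if len(a | b) == size}
        frequent = [c for c in candidates if support(c) >= min_support]
        for c in frequent:
            for w in c:
                base = c - {w}
                if support(c) / support(base) >= min_confidence:
                    result.append((tuple(sorted(base)), w))
    return result
```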

A Stochastic Word-Spacing System Based on Word Category-Pattern (어절 내의 형태소 범주 패턴에 기반한 통계적 자동 띄어쓰기 시스템)

  • Kang, Mi-Young;Jung, Sung-Won;Kwon, Hyuk-Chul
    • Journal of KIISE:Software and Applications
    • /
    • v.33 no.11
    • /
    • pp.965-978
    • /
    • 2006
  • This paper implements an automatic Korean word-spacing system that recognizes words using morpheme unigrams and the pattern their categories form within a candidate word. Although previous Korean word-spacing models offer easy construction and time efficiency, problems such as data sparseness and critical memory size remain, arising from the morpho-typological characteristics of Korean. To cope with both problems, our implementation uses the stochastic information of morpheme unigrams and their category patterns instead of word unigrams. A word's probability in a sentence is obtained from the morpheme probabilities and the weight of each morpheme's category within the candidate word's category pattern. The category weights are trained to minimize the mean error between the observed probabilities of words and those estimated from the words' individual morpheme probabilities, weighted according to their categories' strengths in a given word's category pattern.
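
One plausible reading of the weighted morpheme-unigram scoring is sketched below with hypothetical data structures; the paper trains the category weights, whereas here they are simply looked up.

```python
import math

def word_log_prob(morphemes, unigram_prob, category_weight):
    """Approximate a candidate word's log probability from its morpheme
    unigrams, weighting each morpheme's contribution by its category's
    weight within the word's category pattern (in the paper these
    weights are trained against observed word probabilities).

    morphemes:       list of (form, category) pairs for the candidate
    unigram_prob:    dict morpheme form -> corpus unigram probability
    category_weight: dict (pattern, category) -> trained weight
    """
    pattern = tuple(cat for _, cat in morphemes)      # e.g. (Noun, Josa)
    logp = 0.0
    for form, cat in morphemes:
        w = category_weight.get((pattern, cat), 1.0)  # trained weight
        logp += w * math.log(unigram_prob.get(form, 1e-9))
    return logp
```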

A Generation-based Text Steganography by Maintaining Consistency of Probability Distribution

  • Yang, Boya;Peng, Wanli;Xue, Yiming;Zhong, Ping
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.11
    • /
    • pp.4184-4202
    • /
    • 2021
  • Text steganography combined with natural language generation has become increasingly popular. Existing methods usually embed secret information in the generated words by controlling the sampling during text generation. A candidate pool is constructed by a greedy strategy, and only high-probability words are encoded, which distorts the statistical properties of the text and seriously weakens the security of the steganography. To reduce the influence of the candidate pool on the statistical imperceptibility of the steganography, we propose a steganography method based on a new sampling strategy. Instead of using only high-probability words, we build the candidate pool from words that differ only slightly from the actual sample of the language model, thereby staying consistent with the language model's probability distribution. Moreover, we encode the candidate words according to their probability similarity with the target word, which further preserves the probability distribution. Experimental results show that the proposed method outperforms state-of-the-art steganographic methods in terms of security.
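
A loose sketch of the sampling strategy follows; it is not the authors' exact construction, and the pool size 2**pool_bits and the use of absolute probability differences are assumptions made for illustration.

```python
import random

def embed_step(probs, bits, pool_bits=2):
    """One generation step of pool-based embedding: sample a 'target'
    word from the model's actual distribution, build a candidate pool
    of the 2**pool_bits words whose probabilities are closest to the
    target's, order the pool by that probability similarity, and emit
    the pool member indexed by the secret bits.

    probs: dict word -> language-model probability at this step.
    bits:  string of pool_bits secret bits, e.g. "10".
    """
    words = list(probs)
    # Sampling the target from the model's own distribution keeps the
    # stegotext close to ordinarily sampled text.
    target = random.choices(words, weights=[probs[w] for w in words])[0]
    # Pool = words with probability closest to the target's.
    pool = sorted(words, key=lambda w: abs(probs[w] - probs[target]))
    pool = pool[:2 ** pool_bits]
    return pool[int(bits, 2)]
```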

Word Verification using Similar Word Information and State-Weights of HMM using Genetic Algorithm (유사단어 정보와 유전자 알고리듬을 이용한 HMM의 상태하중값을 사용한 단어의 검증)

  • Kim, Gwang-Tae;Baek, Chang-Heum;Hong, Jae-Geun
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.38 no.1
    • /
    • pp.97-103
    • /
    • 2001
  • The hidden Markov model (HMM) is the most widely used method in speech recognition. In general, HMM parameters are trained to maximize the likelihood (ML) of the training data. Although the ML method performs well, it does not take discrimination against other words into account. To address this problem, a word verification method is proposed that re-recognizes the recognized word and its similar word using a discriminative function between the two words. To find the similar word, the probability of the other words under the HMM is calculated, and the word with the highest probability is selected as the similar word of the model. To achieve discrimination between words, a weight for each state is appended to the HMM parameters, and the weights are calculated by a genetic algorithm. The verifier improved the discrimination between words and reduced the errors caused by similar words, cutting the total error by about 22%.
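
A toy genetic algorithm for the state weights might be organized as below; the population size, crossover, and mutation scheme are illustrative choices, and the discriminative fitness function is supplied by the caller.

```python
import random

def train_state_weights(score, n_states, pop=30, gens=50, mut=0.1):
    """Toy genetic algorithm for per-state HMM weights: each chromosome
    is one weight per state, and fitness is a caller-supplied
    discriminative score (e.g. the weighted log-likelihood margin
    between a word and its similar word over the training set).
    """
    population = [[random.uniform(0.5, 1.5) for _ in range(n_states)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=score, reverse=True)
        parents = population[:pop // 2]                 # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_states)         # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut:                   # small mutation
                child[random.randrange(n_states)] *= random.uniform(0.8, 1.2)
            children.append(child)
        population = parents + children
    return max(population, key=score)
```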

Building a Korean-English Parallel Corpus by Measuring Sentence Similarities Using Sequential Matching of Language Resources and Topic Modeling (언어 자원과 토픽 모델의 순차 매칭을 이용한 유사 문장 계산 기반의 위키피디아 한국어-영어 병렬 말뭉치 구축)

  • Cheon, JuRyong;Ko, YoungJoong
    • Journal of KIISE
    • /
    • v.42 no.7
    • /
    • pp.901-909
    • /
    • 2015
  • In this paper, to build a Korean-English parallel corpus from Wikipedia, we propose a method that finds similar sentences based on language resources and topic modeling. We first apply language resources (a Wiki-dictionary, numbers, and the Daum online dictionary) to match words sequentially. The Wiki-dictionary is constructed from Wikipedia titles, and to take advantage of Wikipedia we use the translation probabilities in the Wiki-dictionary for word matching. In addition, we improve the accuracy of the sentence similarity measure by using word distributions based on topic modeling. In the experiments, a previous study achieved an F1-score of 48.4% with language resources alone in a linear combination, and 51.6% when additionally considering entire word distributions with topic modeling. Our proposed sequential matching method, which adds translation probabilities to the language resources, achieved a result 9.9% better than the previous study (58.3%). When the proposed sequential matching of language resources and topic modeling also considered important word distributions, the system performed 7.5% better than the previous study (59.1%).
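
The combined similarity score might be sketched as below. The linear weight alpha and all data-structure names are assumptions; only the use of Wiki-dictionary translation probabilities and topic distributions is taken from the abstract.

```python
def sentence_similarity(ko_words, en_words, wiki_dict, topic_sim, alpha=0.5):
    """Sketch of the combined score: a sequential-matching term that
    credits each Korean word whose Wiki-dictionary translation
    (weighted by translation probability) appears in the English
    sentence, plus a topic-model term comparing the two sentences'
    topic distributions.

    wiki_dict: dict ko word -> list of (en word, translation probability)
    topic_sim: precomputed similarity of the sentences' topic vectors
    """
    matched = 0.0
    for ko in ko_words:
        for en, p in wiki_dict.get(ko, []):
            if en in en_words:
                matched += p               # credit by translation prob
                break                      # sequential: one match per word
    lexical = matched / max(len(ko_words), 1)
    return alpha * lexical + (1 - alpha) * topic_sim
```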