• Title/Summary/Keyword: Corpus-based Dictionary

Search Results: 55

An Improved Homonym Disambiguation Model based on Bayes Theory (Bayes 정리에 기반한 개선된 동형이의어 분별 모델)

  • 김창환;이왕우
    • Journal of the Korea Computer Industry Society / v.2 no.12 / pp.1581-1590 / 2001
  • This paper presents a more advanced WSD (word sense disambiguation) model than J. Hur (2000)'s model: an improved statistical homonym disambiguation model based on Bayes' theorem. The model uses semantic information (co-occurrence data) obtained from the definitions of the POS-tagged UMRD-S (Ulsan University Machine Readable Dictionary, Semantic Tagged); we extracted semantic features of the context, namely nouns, predicates, and adverbs, from the definitions in the Korean dictionary. We evaluated the accuracy of the WSD system on nine major homonymous nouns and, additionally, on seven homonymous predicates. The internal experiment showed an average accuracy of 98.32% for the nine nouns and 99.53% for the seven predicates. In addition, we ran external tests on the Korean Information Base and ETRI's POS-tagged corpus: the system achieved an average accuracy of 84.42% for the nine nouns over unsupervised-learning sentences from the Korean Information Base and the ETRI corpus, and 70.81% for the seven predicates on the Sejong Project POS-tagged corpus (3.5 million phrases). (An illustrative sketch of Bayes-style disambiguation follows this entry.)

  • PDF
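
A minimal sketch of the kind of Bayes-style homonym disambiguation described above, assuming simple co-occurrence counts as features with add-one smoothing; the sense labels, training pairs, and feature scheme are invented for illustration and are not the paper's UMRD-S data or exact model.

```python
import math
from collections import defaultdict

# Hypothetical training data: (context words, sense label) pairs for one homonym.
# In the paper, comparable co-occurrence data comes from POS-tagged dictionary
# definitions (UMRD-S); here the words and senses are made up.
TRAINING = [
    (["river", "water", "flood"], "bank_river"),
    (["money", "loan", "account"], "bank_finance"),
    (["deposit", "interest", "account"], "bank_finance"),
    (["shore", "fishing", "water"], "bank_river"),
]

def train(samples):
    """Estimate P(sense) and P(word | sense) counts with a shared vocabulary."""
    sense_counts = defaultdict(int)
    word_counts = defaultdict(lambda: defaultdict(int))
    vocab = set()
    for words, sense in samples:
        sense_counts[sense] += 1
        for w in words:
            word_counts[sense][w] += 1
            vocab.add(w)
    return sense_counts, word_counts, vocab

def disambiguate(context, sense_counts, word_counts, vocab):
    """Pick argmax over senses of log P(sense) + sum of log P(word | sense)."""
    total = sum(sense_counts.values())
    best_sense, best_score = None, float("-inf")
    for sense, count in sense_counts.items():
        score = math.log(count / total)
        denom = sum(word_counts[sense].values()) + len(vocab)  # add-one smoothing
        for w in context:
            score += math.log((word_counts[sense].get(w, 0) + 1) / denom)
        if score > best_score:
            best_sense, best_score = sense, score
    return best_sense

sc, wc, vocab = train(TRAINING)
print(disambiguate(["water", "fishing"], sc, wc, vocab))  # -> bank_river
```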

A Study of the Automatic Extraction of Hypernyms and Hyponyms from the Corpus (코퍼스를 이용한 상하위어 추출 연구)

  • Pang, Chan-Seong;Lee, Hae-Yun
    • Korean Journal of Cognitive Science / v.19 no.2 / pp.143-161 / 2008
  • The goal of this paper is to extract the hyponymy relation between words from a corpus. Adopting the basic algorithm of Hearst (1992), I propose a method for pattern-based extraction of semantic relations from the corpus. To this end, I set up a list of hypernym-hyponym pairs from the Sejong Electronic Dictionary and supplemented it with superordinate-subordinate terms from CoreNet. I then extracted from the corpus all sentences that include hypernym-hyponym pairs of the list, and from these extracted sentences collected all those containing meaningful constructions that occur systematically in the corpus. As a result, we obtained 21 generalized patterns. Using a Perl program, we collected sentences matching each of the 21 patterns; 57% of these sentences turned out to have a hyponymy relation. The proposed method is simpler and more advanced than that of Cederberg and Widdows (2003), in that using a wordnet or an electronic dictionary is generally considered efficient for information retrieval. The patterns extracted by this method are helpful when looking for appropriate documents during information retrieval, and they can be used to expand concept networks such as ontologies and thesauruses. However, Korean word order is relatively free, and it is difficult to capture the various expressions of a fixed pattern. In future work, we should investigate more semantic relations than hyponymy, so that we can extract more varied patterns from the corpus. (An illustrative pattern-matching sketch follows this entry.)

  • PDF
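
A small sketch of Hearst-style pattern matching over raw text, shown with two classic English surface patterns for illustration only; the paper's 21 generalized patterns are Korean-specific and are not reproduced here.

```python
import re

# Two classic Hearst (1992) patterns for English; the paper instead derives
# 21 generalized Korean patterns from corpus sentences.
SUCH_AS = re.compile(r"(\w+)\s+such as\s+((?:\w+(?:,\s*)?)+(?:\s*and\s+\w+)?)")
AND_OTHER = re.compile(r"(\w+)\s+and other\s+(\w+)")

def extract_pairs(sentence):
    """Return candidate (hypernym, hyponym) pairs found in one sentence."""
    pairs = []
    m = SUCH_AS.search(sentence)
    if m:
        hypernym = m.group(1)
        hyponyms = re.split(r",\s*|\s+and\s+", m.group(2))
        pairs += [(hypernym, h) for h in hyponyms if h]
    m = AND_OTHER.search(sentence)
    if m:
        pairs.append((m.group(2), m.group(1)))  # "X and other Y": Y is the hypernym
    return pairs

print(extract_pairs("He studies fruits such as apples, pears and plums."))
print(extract_pairs("Bronze and other metals were traded widely."))
```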

A Method of Intonation Modeling for Corpus-Based Korean Speech Synthesizer (코퍼스 기반 한국어 합성기의 억양 구현 방안)

  • Kim, Jin-Young;Park, Sang-Eon;Eom, Ki-Wan;Choi, Seung-Ho
    • Speech Sciences / v.7 no.2 / pp.193-208 / 2000
  • This paper describes a multi-step method of intonation modeling for a corpus-based Korean speech synthesizer. We selected 1,833 sentences covering various syntactic structures and built a corresponding speech corpus uttered by a female announcer. We detected pitch using laryngograph signals, manually marked prosodic boundaries on the recorded speech, and carried out part-of-speech tagging and syntactic analysis on the text. The detected pitch was separated into three frequency bands (low, mid, and high) corresponding to the baseline, the word tone, and the syllable tone, which we predicted using the CART method and a Viterbi search with a word-tone dictionary. Of the collected sentences, 1,500 were used for training and 333 for testing. In the word-tone layer, we compared two methods: predicting the word tone (the mid-frequency component) directly, and predicting it by multiplying the baseline by the ratio of the word tone to the baseline. The former gave a mean error of 12.37 Hz and the latter 12.41 Hz, which are similar. In the syllable-tone layer, the mean error was less than 8.3% of the announcer's mean pitch of 193.56 Hz, so its performance was relatively good. (A small numerical sketch of the two word-tone prediction strategies follows this entry.)

  • PDF
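
A tiny numerical sketch of the two word-tone prediction strategies compared above, with invented pitch values; in the actual system these quantities are predicted by CART models and a Viterbi search over a word-tone dictionary, which are not reproduced here.

```python
# Illustrative only: predict the word tone directly vs. as baseline * ratio,
# then measure the mean absolute error in Hz. All values are made up.
reference_word_tone = [210.0, 198.0, 185.0, 202.0]  # "true" mid-band pitch per word
baseline            = [200.0, 195.0, 180.0, 190.0]  # low-band baseline per word

direct_prediction = [208.0, 200.0, 188.0, 198.0]    # hypothetical direct prediction (Hz)
predicted_ratio   = [1.06, 1.02, 1.02, 1.05]        # hypothetical predicted ratio
ratio_prediction  = [b * r for b, r in zip(baseline, predicted_ratio)]

def mean_abs_error(pred, ref):
    return sum(abs(p - r) for p, r in zip(pred, ref)) / len(ref)

print("direct:", mean_abs_error(direct_prediction, reference_word_tone))
print("ratio :", mean_abs_error(ratio_prediction, reference_word_tone))
```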

Corpus-Based Ontology Learning for Semantic Analysis (의미 분석을 위한 말뭉치 기반의 온톨로지 학습)

  • 강신재
    • Journal of Korea Society of Industrial Information Systems / v.9 no.1 / pp.17-23 / 2004
  • This paper proposes determining word senses in Korean language processing by corpus-based ontology learning. Our approach is a hybrid method: we first apply previously secured dictionary information to select the correct senses of some ambiguous words with high precision, and then use the ontology to disambiguate the remaining ambiguous words. Before the ontology is used as knowledge for disambiguating word senses, the mutual information between concepts in the ontology is calculated. If mutual information is regarded as a weight between ontology concepts, the ontology can be treated as a graph with weighted edges, and we can then locate the least-weighted path from one concept to another. In our practical machine translation system, this word sense disambiguation method achieved a 9% improvement over methods that do not use an ontology for Korean translation. (An illustrative least-weighted-path sketch follows this entry.)

  • PDF
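
A minimal sketch of the least-weighted-path idea over a concept graph, assuming edge weights have already been derived from mutual information (for example, an inverse or negated-log transform so that strongly related concepts are cheap to traverse); the concepts and weights below are invented for the example.

```python
import heapq

# Hypothetical concept graph with symmetric edges; lower weight means the two
# concepts are more strongly associated (weights derived from mutual information).
GRAPH = {
    "bank":    {"finance": 0.4, "river": 1.2},
    "finance": {"bank": 0.4, "loan": 0.3},
    "river":   {"bank": 1.2, "water": 0.2},
    "loan":    {"finance": 0.3},
    "water":   {"river": 0.2},
}

def least_weighted_path(graph, start, goal):
    """Dijkstra's algorithm: return (total weight, path) or (inf, [])."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in graph.get(node, {}).items():
            if neighbor not in visited:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return float("inf"), []

print(least_weighted_path(GRAPH, "loan", "water"))
# -> (2.1, ['loan', 'finance', 'bank', 'river', 'water'])
```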

Using Corpora for the Study of Word-Formation: A Case Study in English Negative Prefixation

  • Kwon, Heok-Seung
    • Korean Journal of English Language and Linguistics / v.1 no.3 / pp.369-386 / 2001
  • This paper will show that traditional approaches to the derivation of different negative words have been of an essentially hypothetical nature, based on either linguists' intuitions or rather scant evidence, and that native-speaker dictionary entries show meaning potentials (rather than meanings) which are in fact linguistic and cognitive prototypes. The purpose of this paper is to demonstrate that using a large corpus of natural language can provide better answers to questions about word-formation (i.e., with particular reference to negative prefixation) than any other source of information.

  • PDF

A Hybrid Method of Verb Disambiguation in Machine Translation (기계번역에서 동사 모호성 해결에 관한 하이브리드 기법)

  • Moon, Yoo-Jin;Martha Palmer
    • The Transactions of the Korea Information Processing Society / v.5 no.3 / pp.681-687 / 1998
  • This paper presents a hybrid method for disambiguating verb meaning in machine translation. The proposed verb translation algorithm performs a concept-based method and a statistics-based method simultaneously, using a collocation dictionary, WordNet, and statistical information extracted from a corpus. In the transfer phase of machine translation, the algorithm tries to find the target word for the source verb. If it fails, it consults WordNet, calculating word similarities between the logical constraints of the source sentence and those in the collocation dictionary; at the same time, it consults the co-occurrence statistics extracted from the corpus. The experimental results show that the algorithm translates verbs more accurately than the other algorithms and improves the accuracy of verb translation by 24.8% compared to the collocation-based method. (An illustrative dictionary-then-WordNet fallback sketch follows this entry.)

  • PDF
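
A small sketch of the dictionary-lookup-then-WordNet-similarity fallback described above, using NLTK's WordNet interface; the collocation entries, the noun gloss map, and the choice of path similarity are illustrative assumptions, and the paper's actual constraint matching is richer than this.

```python
# Requires: pip install nltk, then python -m nltk.downloader wordnet
from nltk.corpus import wordnet as wn

# Hypothetical collocation dictionary: (source verb, object noun) -> target verb.
COLLOCATIONS = {
    ("먹다", "밥"): "eat",
    ("먹다", "약"): "take",
}

# Hypothetical English glosses so WordNet can compare Korean object nouns.
NOUN_GLOSS = {"밥": "rice", "약": "medicine", "죽": "porridge"}

def translate_verb(source_verb, object_noun):
    """Try the collocation dictionary first; otherwise return the translation of
    the entry whose object noun is most WordNet-similar to the given noun."""
    if (source_verb, object_noun) in COLLOCATIONS:
        return COLLOCATIONS[(source_verb, object_noun)]
    target_synsets = wn.synsets(NOUN_GLOSS[object_noun], pos=wn.NOUN)
    best, best_sim = None, 0.0
    for (verb, noun), translation in COLLOCATIONS.items():
        if verb != source_verb:
            continue
        for s1 in wn.synsets(NOUN_GLOSS[noun], pos=wn.NOUN):
            for s2 in target_synsets:
                sim = s1.path_similarity(s2) or 0.0
                if sim > best_sim:
                    best, best_sim = translation, sim
    return best

# "죽" (porridge) is not in the dictionary, so the fallback compares it with the
# known object nouns and will likely pick "eat" (porridge is closer to rice).
print(translate_verb("먹다", "죽"))
```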

Syllable-based Korean POS Tagging Based on Combining a Pre-analyzed Dictionary with Machine Learning (기분석사전과 기계학습 방법을 결합한 음절 단위 한국어 품사 태깅)

  • Lee, Chung-Hee;Lim, Joon-Ho;Lim, Soojong;Kim, Hyun-Ki
    • Journal of KIISE / v.43 no.3 / pp.362-369 / 2016
  • This study presents the design of a hybrid algorithm for syllable-based Korean POS tagging. Previous syllable-based work on Korean POS tagging has relied on sequence labeling and has mostly used machine learning alone. We present a new algorithm that integrates a machine learning method with a pre-analyzed dictionary, using the Sejong tagged corpus for training and evaluation. While the machine learning engine achieved an eojeol precision of 0.964, the proposed hybrid engine achieved 0.990. In a quiz-domain test, the machine learning engine and the proposed hybrid engine obtained 0.961 and 0.972, respectively. These results indicate that our method is effective for Korean POS tagging. (An illustrative dictionary-plus-fallback sketch follows this entry.)
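
A minimal sketch of combining a pre-analyzed dictionary with a statistical tagger, as described at a high level above; the dictionary entries and the stub fallback tagger are placeholders, not the paper's syllable-based sequence-labeling model.

```python
# Hypothetical pre-analyzed dictionary: surface eojeol -> fixed morpheme/POS analysis.
PRE_ANALYZED = {
    "하늘이": [("하늘", "NNG"), ("이", "JKS")],
    "맑다":   [("맑", "VA"), ("다", "EF")],
}

def ml_tagger(eojeol):
    """Placeholder for the syllable-based machine-learning engine
    (a sequence-labeling model in the actual system)."""
    return [(eojeol, "UNK")]

def tag_sentence(eojeols):
    """Use the pre-analyzed dictionary when it covers an eojeol;
    fall back to the ML engine otherwise."""
    return [PRE_ANALYZED.get(e) or ml_tagger(e) for e in eojeols]

print(tag_sentence(["하늘이", "맑다", "오늘은"]))
```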

POSTTS: Corpus-Based Korean TTS based on Natural Language Analysis (POSTTS : 자연어 분석을 통한 코퍼스 기반 한국어 TTS)

  • Ha Ju-Hong;Zheng Yu;Kim Byeongchang;Lee Geunbae Lee
    • Proceedings of the KSPS conference / 2003.05a / pp.87-90 / 2003
  • In order to produce high-quality synthesized speech, it is very important to obtain accurate grapheme-to-phoneme conversion and an accurate prosody model from text using natural language processing; robust preprocessing of non-Korean characters is also required. In this paper, we analyzed Korean texts using a morphological analyzer, a part-of-speech tagger, and a syntactic chunker. We present a new grapheme-to-phoneme conversion method for unlimited-vocabulary Korean TTS, a hybrid of dictionary-based and rule-based methods, and we constructed a prosody model using a probabilistic method and a decision-tree-based method. (An illustrative dictionary-plus-rules G2P sketch follows this entry.)

  • PDF
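
A toy sketch of the dictionary-plus-rules grapheme-to-phoneme idea; the single exception entry and the single rule below are illustrative stand-ins (real Korean G2P applies many phonological rules on decomposed jamo, and the paper's hybrid method is far more complete).

```python
# Hypothetical exception dictionary: orthographic form -> pronunciation.
G2P_DICTIONARY = {
    "맛있다": "마싣따",
}

# One illustrative rule on jamo-decomposed input: ㅂ becomes ㅁ before ㄴ
# (obstruent nasalization). A real system has a full ordered rule set.
RULES = [
    ("ㅂㄴ", "ㅁㄴ"),
]

def g2p(word):
    """Dictionary lookup first; otherwise apply rewrite rules in order."""
    if word in G2P_DICTIONARY:
        return G2P_DICTIONARY[word]
    pron = word
    for pattern, replacement in RULES:
        pron = pron.replace(pattern, replacement)
    return pron

print(g2p("맛있다"))            # dictionary hit
print(g2p("ㄱㅏㅂㄴㅣㄷㅏ"))    # rule hit on jamo-decomposed "갑니다" -> ㄱㅏㅁㄴㅣㄷㅏ
```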

Corpus-Based Unrestricted Vocabulary Mandarin TTS (코퍼스 기반 무제한 단어 중국어 TTS)

  • Yu Zheng;Ha Ju-Hong;Kim Byeongchang;Lee Gary Geunbae
    • Proceedings of the KSPS conference / 2003.10a / pp.175-179 / 2003
  • In order to produce high-quality (intelligible and natural) synthesized speech, it is very important to obtain accurate grapheme-to-phoneme conversion and an accurate prosody model. In this paper, we analyzed Chinese texts using word segmentation, POS tagging, and unknown-word recognition. We present a grapheme-to-phoneme conversion that combines dictionary-based and rule-based methods, and we constructed a prosody model using a probabilistic method and a decision-tree-based error correction method. Based on the results of this analysis, we can successfully select and concatenate the exact syllable synthesis units from the Chinese synthesis DB. (An illustrative unit-selection sketch follows this entry.)

  • PDF
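
A minimal sketch of syllable-level unit selection as outlined above; the unit database, the precomputed costs, and the pinyin/tone keys are simplified assumptions rather than the system's actual features.

```python
# Hypothetical unit database: (syllable, tone) -> candidate units, each with a
# prosodic mismatch cost precomputed against the target prosody model.
UNIT_DB = {
    ("ni", 3):  [{"id": "ni3_a", "cost": 0.2}, {"id": "ni3_b", "cost": 0.5}],
    ("hao", 3): [{"id": "hao3_a", "cost": 0.4}, {"id": "hao3_b", "cost": 0.1}],
}

def select_units(target_syllables):
    """Greedily pick the lowest-cost candidate for each target syllable.
    (A real system also weighs concatenation costs between adjacent units.)"""
    chosen = []
    for syllable in target_syllables:
        candidates = UNIT_DB.get(syllable)
        if not candidates:
            raise KeyError(f"no synthesis unit for {syllable}")
        chosen.append(min(candidates, key=lambda u: u["cost"])["id"])
    return chosen

print(select_units([("ni", 3), ("hao", 3)]))  # -> ['ni3_a', 'hao3_b']
```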

Transfer Dictionary for A Token Based Transfer Driven Korean-Japanese Machine Translation (토큰기반 변환중심 한일 기계번역을 위한 변환사전)

  • Yang Seungweon
    • Journal of Korea Society of Industrial Information Systems / v.9 no.3 / pp.64-70 / 2004
  • Korean and Japanese share the same sentence structure because they belong to the same family of languages, so transfer-driven machine translation is the most efficient way to translate between them. This paper introduces a method for creating a transfer dictionary for Token-Based Transfer-Driven Korean-Japanese Machine Translation (TB-TDMT). If the transfer dictionaries are built well, we avoid the wasted effort of traditional full parsing by performing shallow parsing: the semi-parser produces a dependency tree containing only the minimum information needed by the output generation module. We constructed the transfer dictionaries using a corpus obtained from the ETRI spoken-language database. Our system was tested on 900 utterances collected from the travel-planning domain; its success ratio is 92% in a restricted testing environment and 81% in an unrestricted testing environment. (An illustrative transfer-dictionary lookup sketch follows this entry.)

  • PDF
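
A toy sketch of a token-based transfer lookup in the spirit of the approach described above; the entries, the token segmentation, and the greedy longest-match strategy are illustrative assumptions, not the paper's actual dictionary format or semi-parser output.

```python
# Hypothetical Korean -> Japanese transfer dictionary keyed by token sequences.
TRANSFER_DICT = {
    ("예약", "하고", "싶습니다"): "予約したいです",
    ("호텔",): "ホテル",
    ("을",): "を",
}

def transfer(tokens):
    """Greedy longest-match transfer of a Korean token sequence into Japanese."""
    output, i = [], 0
    max_len = max(len(key) for key in TRANSFER_DICT)
    while i < len(tokens):
        for span in range(min(max_len, len(tokens) - i), 0, -1):
            key = tuple(tokens[i:i + span])
            if key in TRANSFER_DICT:
                output.append(TRANSFER_DICT[key])
                i += span
                break
        else:
            output.append(tokens[i])  # pass unknown tokens through unchanged
            i += 1
    return "".join(output)

print(transfer(["호텔", "을", "예약", "하고", "싶습니다"]))  # -> ホテルを予約したいです
```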