• Title/Summary/Keyword: sense disambiguation


Word Sense Disambiguation based on Concept Learning with a focus on the Lowest Frequency Words (저빈도어를 고려한 개념학습 기반 의미 중의성 해소)

  • Kim Dong-Sung; Choe Jae-Woong
    • Language and Information, v.10 no.1, pp.21-46, 2006
  • This study proposes a Word Sense Disambiguation (WSD) algorithm based on concept learning, with special emphasis on statistically meaningful lowest-frequency words. Previous work on WSD typically makes use of collocation frequency and its probability. Such probability-based WSD approaches tend to ignore the lowest-frequency words, which can be meaningful in context. In this paper, we show an algorithm that extracts and uses these meaningful lowest-frequency words in WSD. The learning method is adopted from the Find-Specific algorithm of Mitchell (1997), in which the search proceeds from specific predefined hypothesis spaces to more general ones. In our model, this algorithm is used to find contexts with the most specific classifiers before moving to more general ones. We build up small seed data and apply them to relatively large test data. Following the algorithm in Yarowsky (1995), the classified test data are exhaustively included in the seed data, thus expanding it. However, this may introduce considerable noise into the seed data, so we introduce the maximum a posteriori hypothesis, based on Bayes' theorem, to validate whether the new seed data are noise. We use a Naive Bayes classifier and show that applying the Find-Specific algorithm enhances the correctness of WSD.

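The bootstrapping-with-noise-filtering loop described above can be sketched briefly. The Python below is a minimal illustration, not the authors' implementation: a Naive Bayes classifier is trained on seed data, unlabeled contexts are classified, and only instances whose maximum a posteriori score beats the runner-up sense by a margin are folded back into the seed set. The function names and the margin threshold are assumptions.

```python
import math
from collections import Counter, defaultdict

def train_nb(seed):
    """seed: list of (context_words, sense) pairs."""
    priors, likelihoods = Counter(), defaultdict(Counter)
    for words, sense in seed:
        priors[sense] += 1
        likelihoods[sense].update(words)
    return priors, likelihoods

def map_sense(words, priors, likelihoods):
    """Return (best sense, log-posterior per sense) under Naive Bayes."""
    total = sum(priors.values())
    vocab = {w for c in likelihoods.values() for w in c}
    scores = {}
    for sense, count in priors.items():
        n = sum(likelihoods[sense].values())
        logp = math.log(count / total)
        for w in words:  # add-one smoothing over the shared vocabulary
            logp += math.log((likelihoods[sense][w] + 1) / (n + len(vocab) + 1))
        scores[sense] = logp
    return max(scores, key=scores.get), scores

def bootstrap(seed, unlabeled, margin=2.0, rounds=5):
    """Grow the seed set Yarowsky-style, but keep only MAP-validated
    labels (the noise check); the rest waits for the next round."""
    for _ in range(rounds):
        priors, likes = train_nb(seed)
        still_unlabeled = []
        for words in unlabeled:
            best, scores = map_sense(words, priors, likes)
            rest = [s for k, s in scores.items() if k != best]
            if not rest or scores[best] - max(rest) >= margin:
                seed.append((words, best))
            else:
                still_unlabeled.append(words)
        unlabeled = still_unlabeled
    return seed
```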

A Framework for WordNet-based Word Sense Disambiguation (워드넷 기반의 단어 중의성 해소 프레임워크)

  • Ren, Chulan; Cho, Sehyeong
    • Journal of the Korean Institute of Intelligent Systems, v.23 no.4, pp.325-331, 2013
  • This paper presents a framework and method for word sense disambiguation, along with the results. In this work, WordNet is used for two different purposes: as a dictionary, and as an ontology containing the hierarchical structure that represents hypernym-hyponym relations. The advantage of this approach is twofold. First, it provides a very simple method that is easily implemented. Second, it does not suffer from the lack of large corpus data that a statistical method would require. In the future, this can be extended to incorporate other relations, such as synonyms, meronyms, and antonyms.
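
Because the framework uses WordNet both as a dictionary and as a hypernym hierarchy, the core idea can be sketched with NLTK's WordNet interface. This is a simplified stand-in for the paper's method: each candidate sense of the target is scored by its path similarity, computed over the hypernym hierarchy, to the senses of the context words.

```python
# Requires: nltk and nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def disambiguate(target, context_words):
    """Pick the noun sense of `target` closest in the hypernym
    hierarchy to the senses of the context words."""
    best_sense, best_score = None, -1.0
    for sense in wn.synsets(target, pos=wn.NOUN):
        score = 0.0
        for word in context_words:
            sims = [sense.path_similarity(s) or 0.0
                    for s in wn.synsets(word, pos=wn.NOUN)]
            if sims:
                score += max(sims)
        if score > best_score:
            best_sense, best_score = sense, score
    return best_sense

# Example: "bank" next to "money" and "loan" should favor the
# financial-institution sense over the river-bank sense.
print(disambiguate("bank", ["money", "loan", "deposit"]))
```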

Noun Sense Identification of Korean Nominal Compounds Based on Sentential Form Recovery

  • Yang, Seong-Il; Seo, Young-Ae; Kim, Young-Kil; Ra, Dong-Yul
    • ETRI Journal, v.32 no.5, pp.740-749, 2010
  • In a machine translation system, word sense disambiguation plays an essential role in translating a word properly when it can be translated differently depending on the context. Previous research on sense identification has mostly focused on adjacent words as context information, so in the case of nominal compounds, sense tagging of unit nouns has mainly depended on the other nouns surrounding the target word. In this paper, we present a practical method for sense tagging of Korean unit nouns in a nominal compound. To overcome the data sparseness problem of traditional methods, the proposed method adopts complement-predicate relation knowledge constructed for machine translation systems. Our method is based on a sentential form recovery technique, which recognizes grammatical relationships between unit nouns by exploiting the characteristics of Korean predicative nouns. To show that our method is effective on text in general domains, experiments were performed on a test set randomly extracted from article titles in various newspaper sections.
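
The central idea, recovering a nominal compound to a sentential form so that complement-predicate knowledge constrains unit-noun senses, can be illustrated with a toy sketch. The case-frame table, sense inventory, and function below are all invented for the example; the paper's actual knowledge was built for machine translation systems.

```python
# Toy illustration of sentential form recovery for "modifier + predicative
# head" compounds: the head noun is expanded to its verbal case frame, and
# the complement slot's selectional restriction filters the modifier's senses.
CASE_FRAMES = {
    # predicative noun -> (recovered predicate, senses admissible as object)
    "development": ("develop", {"software", "plan", "region"}),
    "collection":  ("collect", {"artifact", "money", "data"}),
}

NOUN_SENSES = {
    "program": ["software", "broadcast", "schedule"],
}

def tag_compound(modifier, head):
    """Recover 'modifier head' to predicate(modifier) and filter senses."""
    candidates = NOUN_SENSES.get(modifier, [])
    if head not in CASE_FRAMES:
        return None, candidates
    predicate, allowed = CASE_FRAMES[head]
    clause = f"{predicate}({modifier})"          # e.g. develop(program)
    return clause, [s for s in candidates if s in allowed]

# "program development" -> ('develop(program)', ['software'])
print(tag_compound("program", "development"))
```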

Homonym Disambiguation based on Mutual Information and Sense-Tagged Compound Noun Dictionary (상호정보량과 복합명사 의미사전에 기반한 동음이의어 중의성 해소)

  • Heo, Jeong; Seo, Hee-Cheol; Jang, Myung-Gil
    • Journal of KIISE: Software and Applications, v.33 no.12, pp.1073-1089, 2006
  • The goal of Natural Language Processing (NLP) is to make a computer understand natural language and deliver its meanings to humans. Word Sense Disambiguation (WSD) is a very important technology for achieving this goal. In this paper, we describe a technology for automatic homonym disambiguation using both Mutual Information (MI) and a Sense-Tagged Compound Noun Dictionary. Previous work using word definitions in a dictionary suffered from data sparseness because it relied on exact word matching. Our work overcomes this problem by using MI, which is an association measure between words. To reflect language features, the rate of word pairs with MI values, sense frequency, and the size of word definitions are used as weights in our system. We constructed a Sense-Tagged Compound Noun Dictionary for high-frequency compound nouns and used it to resolve homonym sense disambiguation. Experimental data for testing and evaluating our system was constructed from QA (Question Answering) test data consisting of about 200 query sentences and answer paragraphs. We performed four types of experiments: using MI alone, the system achieved a precision of 65.06%; with the weighted values, 85.35%; and with the Sense-Tagged Compound Noun Dictionary, 88.82%.
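
A rough sketch of the MI-based matching follows. The combination of weights is a stand-in for the paper's scheme (rate of word pairs with MI values, sense frequency, and definition size), since the abstract does not give exact formulas.

```python
import math

def pmi(w1, w2, cooc, freq, n):
    """Pointwise mutual information from corpus counts:
    cooc[(w1, w2)] joint count, freq[w] marginal count, n corpus size."""
    pxy = cooc.get((w1, w2), 0) / n
    px, py = freq.get(w1, 0) / n, freq.get(w2, 0) / n
    return math.log2(pxy / (px * py)) if pxy > 0 and px > 0 and py > 0 else 0.0

def score_sense(context, definition, cooc, freq, n,
                sense_freq=1, w_mi=1.0, w_freq=0.1, w_size=0.01):
    """Score one sense by MI between context words and its definition
    words, plus stand-in weights for sense frequency and definition size;
    MI replaces the exact word matching that causes data sparseness."""
    pairs = [(c, d) for c in context for d in definition]
    mi_vals = [pmi(c, d, cooc, freq, n) for c, d in pairs]
    hit_rate = sum(v > 0 for v in mi_vals) / max(len(pairs), 1)
    return (w_mi * sum(mi_vals) + hit_rate
            + w_freq * math.log(sense_freq + 1) - w_size * len(definition))
```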

A Word Sense Disambiguation Method with a Semantic Network (의미네트워크를 이용한 단어의미의 모호성 해결방법)

  • Dingyul Ra
    • Korean Journal of Cognitive Science, v.3 no.2, pp.225-248, 1992
  • In this paper, word sense disambiguation methods utilizing a knowledge base based on a semantic network are introduced. The basic idea is to keep track of a set of paths in the knowledge base that correspond to the incremental semantic interpretation of an input sentence. These paths are called semantic paths. When the parser reads a word, the senses of this word that are not involved in any of the semantic paths are removed. The removal operation is then propagated through the knowledge base to trigger the removal of senses of other words read earlier. This operation is applied recursively as long as senses can be removed, and is called recursive word sense removal. Concretion of a vague word's concept is another important word sense disambiguation method; we introduce a method called path adjustment that extends the concretion operation. How to use semantic association or syntactic processing in cooperation with the above methods is also considered.
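
The recursive word sense removal can be rendered as a small fixed-point computation over a sense graph. The representation below, sense sets per word and undirected links standing for semantic paths, is an assumption made for the illustration.

```python
from collections import defaultdict

def recursive_removal(senses, edges):
    """senses: {word: set of sense ids}; edges: set of (sense, sense)
    links in the semantic network. A sense survives only while it links
    to some surviving sense of every other word; removals propagate."""
    neighbors = defaultdict(set)
    for a, b in edges:
        neighbors[a].add(b)
        neighbors[b].add(a)
    changed = True
    while changed:  # propagate removals until a fixed point is reached
        changed = False
        for word, cands in senses.items():
            for s in list(cands):
                ok = all(any(t in neighbors[s] for t in other)
                         for w, other in senses.items()
                         if w != word and other)
                if not ok and len(cands) > 1:
                    cands.discard(s)
                    changed = True
    return senses

# "bank" sense a links to "loan" sense c, so sense b is removed.
print(recursive_removal({"bank": {"a", "b"}, "loan": {"c"}}, {("a", "c")}))
```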

Graph-Based Word Sense Disambiguation Using Iterative Approach (반복적 기법을 사용한 그래프 기반 단어 모호성 해소)

  • Kang, Sangwoo
    • The Journal of Korean Institute of Next Generation Computing, v.13 no.2, pp.102-110, 2017
  • Current word sense disambiguation techniques employ various machine learning-based methods. Among the approaches proposed to address this problem is the knowledge-base approach, which determines the sense of an ambiguous word from knowledge-base information with no training corpus. Within unsupervised techniques that use a knowledge base, graph-based and similarity-based methods have been the main research areas. The graph-based method has the advantage of constructing a semantic graph that delineates all paths between the different senses an ambiguous word may have; however, unnecessary semantic paths may be introduced, increasing the risk of errors. To solve this problem and construct a fine-grained graph, this paper proposes a model that iteratively constructs the graph while eliminating unnecessary nodes and edges, i.e., senses and semantic paths. A hybrid similarity estimation model is applied to estimate a more accurate sense in the constructed semantic graph. Because the proposed model uses BabelNet, a multilingual lexical knowledge base, it is not limited to a specific language.
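
A schematic of the iterative construct-and-prune loop, written with networkx; the `related` predicate is a placeholder for BabelNet relation lookups, and node degree is a deliberately simple stand-in for the paper's hybrid similarity estimation.

```python
import networkx as nx

def build_graph(word_senses, related):
    """word_senses: {word: [sense ids, unique across words]};
    related(s1, s2) -> bool, standing in for a BabelNet lookup."""
    g = nx.Graph()
    all_senses = [s for ss in word_senses.values() for s in ss]
    g.add_nodes_from(all_senses)
    for i, s1 in enumerate(all_senses):
        for s2 in all_senses[i + 1:]:
            if related(s1, s2):
                g.add_edge(s1, s2)
    return g

def iterative_wsd(word_senses, related):
    """Repeatedly drop each word's weakest sense together with its
    semantic paths, refining the graph until one sense per word remains."""
    g = build_graph(word_senses, related)
    current = {w: list(ss) for w, ss in word_senses.items()}
    pruned = True
    while pruned:
        pruned = False
        for word, senses in current.items():
            if len(senses) > 1:
                weakest = min(senses, key=lambda s: g.degree(s))
                senses.remove(weakest)
                g.remove_node(weakest)   # its edges (paths) go with it
                pruned = True
    return {w: ss[0] for w, ss in current.items()}
```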

Word Sense Classification Using Support Vector Machines (지지벡터기계를 이용한 단어 의미 분류)

  • Park, Jun Hyeok; Lee, Songwook
    • KIPS Transactions on Software and Data Engineering, v.5 no.11, pp.563-568, 2016
  • The word sense disambiguation problem is to find, in a sentence, the correct sense of an ambiguous word that has multiple senses in a dictionary. We regard this problem as a multi-class classification problem and classify the ambiguous word using Support Vector Machines. Context words of the ambiguous word, extracted from the Sejong sense-tagged corpus, are represented in two kinds of vector spaces: in one, contexts are represented as word vectors with binary weights; in the other, the context words are mapped to vectors by a word embedding model. In experiments, we achieved an accuracy of 87.0% with the binary context-word vectors and 86.0% with the word embedding model.
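
The two feature settings can be reproduced in miniature with scikit-learn. The toy corpus below is invented, and random vectors stand in for a trained embedding model, since the Sejong corpus is not bundled here.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_extraction.text import CountVectorizer

contexts = ["money bank loan", "river bank water", "bank deposit account"]
senses = ["finance", "river", "finance"]

# Setting 1: binary context-word vectors.
vec = CountVectorizer(binary=True)
X_bin = vec.fit_transform(contexts)
clf_bin = SVC(kernel="linear").fit(X_bin, senses)

# Setting 2: average the embeddings of the context words (random
# vectors here, standing in for a trained word embedding model).
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=50) for w in vec.get_feature_names_out()}
X_emb = np.array([np.mean([emb[w] for w in c.split() if w in emb], axis=0)
                  for c in contexts])
clf_emb = SVC(kernel="linear").fit(X_emb, senses)

print(clf_bin.predict(vec.transform(["bank loan interest"])))
```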

A Study on the Computational Model of Word Sense Disambiguation, based on Corpora and Experiments on Native Speaker's Intuition (직관 실험 및 코퍼스를 바탕으로 한 의미 중의성 해소 계산 모형 연구)

  • Kim, Dong-Sung; Choe, Jae-Woong
    • Korean Journal of Cognitive Science, v.17 no.4, pp.303-321, 2006
  • According to Harris's (1966) distributional hypothesis, understanding the meaning of a word is thought to be dependent on its context. Under this hypothesis about human language ability, this paper proposes a computational model of the native speaker's language processing mechanism for word sense disambiguation, based on two sets of experiments. Among the three computational models discussed in this paper, namely the logic model, the probabilistic model, and the probabilistic inference model, the experiments show that the logic model is applied first for semantic disambiguation of the key word. Next, if the logic model fails to apply, the probabilistic model becomes most relevant. The three models were also compared with the test results in terms of the Pearson correlation coefficient. It turns out that the logic model best explains human decision behaviour on the ambiguous words, and the probabilistic inference model comes next. The experiment consists of two parts: one involves 30 sentences extracted from a 1-million-graphic-word corpus, where the agreement rate among native speakers in word sense disambiguation is 98%; the other part, designed to exclude the logic model effect, is composed of 50 cleft sentences.

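The reported ordering, logic model first and probabilistic model as fallback, suggests a simple cascade. The constraint table and sense counts below are invented for the example; they are not the paper's data.

```python
CONSTRAINTS = {
    # (key word, context cue) -> admissible senses (the logic model)
    ("bank", "loan"):  {"finance"},
    ("bank", "river"): {"terrain"},
}
SENSE_COUNTS = {"bank": {"finance": 80, "terrain": 20}}

def disambiguate(word, context):
    """Apply hard constraints first; fall back to relative sense
    frequency (the probabilistic model) only if they underdetermine."""
    senses = set(SENSE_COUNTS[word])
    for cue in context:                 # logic model: intersect constraints
        senses &= CONSTRAINTS.get((word, cue), senses)
    if len(senses) == 1:
        return senses.pop()
    return max(senses, key=lambda s: SENSE_COUNTS[word][s])

print(disambiguate("bank", ["loan"]))   # 'finance' by the logic model
print(disambiguate("bank", ["today"]))  # 'finance' by frequency fallback
```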

A Korean Homonym Disambiguation Model Based on Statistics Using Weights (가중치를 이용한 통계 기반 한국어 동형이의어 분별 모델)

  • Kim, Jun-Su; Choi, Ho-Seop; Ock, Cheol-Young
    • Journal of KIISE: Software and Applications, v.30 no.11, pp.1112-1123, 2003
  • WSD (word sense disambiguation) is one of the most difficult problems in Korean information processing. A Bayesian model using semantic information extracted from a definition corpus (1 million POS-tagged eojeol of Korean dictionary definitions) achieved an accuracy of 72.08% (nouns 78.12%, verbs 62.45%). This paper proposes a statistical WSD model using NPH (New Prior Probability of Homonym sense) and distance weights. We select 46 homonyms (30 nouns, 16 verbs) that occur with high frequency in the definition corpus, and then evaluate the model on 47,977 contexts from the 21C Sejong Corpus (3.5 million POS-tagged eojeol). The WSD model using NPH improves accuracy by an average of 1.70%, and the model using both NPH and distance weights by 2.01%.
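
A hedged sketch of the two refinements named in the abstract, a new per-sense prior (NPH) and distance-decayed context weights, follows. The exact NPH formula is not given in the abstract, so a smoothed, renormalized sense frequency stands in for it.

```python
import math

def nph_prior(sense_freq, alpha=0.5):
    """Stand-in NPH: smoothed, renormalized sense frequencies."""
    total = sum(sense_freq.values()) + alpha * len(sense_freq)
    return {s: (f + alpha) / total for s, f in sense_freq.items()}

def score(sense, context, target_pos, cooc, prior, vocab_size=1000):
    """context: list of (word, position) pairs; nearer words weigh more.
    cooc[sense] maps context words to co-occurrence counts."""
    logp = math.log(prior[sense])
    denom = sum(cooc[sense].values()) + vocab_size
    for word, pos in context:
        w = 1.0 / (abs(pos - target_pos) + 1)     # distance weight
        logp += w * math.log((cooc[sense].get(word, 0) + 1) / denom)
    return logp
```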

Topic Level Disambiguation for Weak Queries

  • Zhang, Hui; Yang, Kiduk; Jacob, Elin
    • Journal of Information Science Theory and Practice, v.1 no.3, pp.33-46, 2013
  • Despite limited success, today's information retrieval (IR) systems are not intelligent or reliable. IR systems return poor search results when users formulate their information needs into incomplete or ambiguous queries (i.e., weak queries). Therefore, one of the main challenges in modern IR research is to provide consistent results across all queries by improving the performance on weak queries. However, existing IR approaches such as query expansion are not overly effective because they make little effort to analyze and exploit the meanings of the queries. Furthermore, word sense disambiguation approaches, which rely on textual context, are ineffective against weak queries, which are typically short. Motivated by the demand for a robust IR system that can consistently provide highly accurate results, this study implemented a novel topic detection method that leveraged both a language model and the structural knowledge of Wikipedia, and systematically evaluated the effect of query disambiguation and topic-based retrieval approaches on TREC collections. The results not only confirm the effectiveness of the proposed topic detection and topic-based retrieval approaches but also demonstrate that query disambiguation does not improve IR as expected.
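
Topic-level disambiguation of a weak query can be sketched as choosing the topic language model that maximizes the query's likelihood. The tiny unigram models below are invented; in the paper, such models would be estimated from Wikipedia text and combined with its structural knowledge.

```python
import math

# Hypothetical per-topic unigram language models (word -> probability).
TOPIC_MODELS = {
    "finance":   {"bank": 0.05, "loan": 0.04, "rate": 0.03},
    "geography": {"bank": 0.02, "river": 0.05, "delta": 0.03},
}

def topic_for_query(query, models, floor=1e-6):
    """Assign a weak query to the topic under which it is most likely;
    `floor` smooths unseen words."""
    def loglik(topic):
        lm = models[topic]
        return sum(math.log(lm.get(t, floor)) for t in query.split())
    return max(models, key=loglik)

print(topic_for_query("bank rate", TOPIC_MODELS))  # -> 'finance'
```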