Word sense disambiguation

Automatic WordNet mapping using word sense disambiguation (의미 애매성 해소를 이용한 WordNet 자동 매핑)

  • 이창기;이근배
    • Proceedings of the Korean Society for Cognitive Science Conference
    • /
    • 2000.06a
    • /
    • pp.262-268
    • /
    • 2000
  • This paper describes a method for automatically constructing a Korean concept hierarchy using word sense disambiguation, an English bilingual dictionary, and a concept hierarchy that exists for a foreign language. Compared with existing approaches to concept-hierarchy construction, this method requires less effort and time. We also describe the six features used for word sense disambiguation in the automatic construction method.

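The mapping idea in the abstract above can be sketched as a toy example: a Korean word is looked up in a bilingual dictionary, the WordNet-style synsets of each English translation are gathered, and senses supported by more translations are ranked higher. Every dictionary entry and synset name below is an illustrative assumption, not data from the paper, and the real system rescores candidates with its six WSD features.

```python
from collections import Counter

# Toy Korean -> English bilingual dictionary (illustrative, not from the paper).
bilingual = {
    "배": ["pear", "ship", "belly"],
}

# Toy English word -> WordNet-style synset IDs (illustrative).
synsets = {
    "pear": ["pear.n.01", "pear_tree.n.01"],
    "ship": ["ship.n.01"],
    "belly": ["abdomen.n.01"],
}

def candidate_synsets(korean_word):
    """Collect candidate WordNet synsets for a Korean word via its translations."""
    counts = Counter()
    for eng in bilingual.get(korean_word, []):
        for syn in synsets.get(eng, []):
            counts[syn] += 1
    # Rank candidates; the paper's WSD features would rescore these.
    return [syn for syn, _ in counts.most_common()]

print(candidate_synsets("배"))
```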
Determining a Syntactic Case of Auxiliary Postposition for Improving Accuracy of Polysemy Word-Sense-Disambiguation (다의어 분별 정확률 개선을 위한 보조사의 통사격 결정)

  • Shin, Joon-Choul;Ock, Cheol-Young
    • Annual Conference on Human and Language Technology
    • /
    • 2016.10a
    • /
    • pp.102-104
    • /
    • 2016
  • Subcategorization is linguistic information that defines the dependency relation between a predicate and its complements, and it can be used for polysemous-word tagging and in many other areas of natural language processing. However, the required arguments handled by subcategorization are expressed with case markers, so auxiliary postpositions, which in fact appear frequently in Korean, are not covered. Because of this, using only the case markers that appear in the subcategorization frames causes a serious drop in recall. This paper proposes a method for accepting an auxiliary postposition as filling a required argument of a subcategorization frame when it is used in place of a case marker, and experimentally demonstrates the benefit that arises especially when the method is applied to auxiliary postpositions.

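A minimal sketch of the idea in this abstract: when a required argument of a predicate's subcategorization frame is marked by an auxiliary postposition (e.g. 은/는, 도) instead of a case marker, the argument can still be accepted by letting the auxiliary postposition stand in for a set of candidate syntactic cases. The frame and the postposition-to-case mapping below are illustrative assumptions, not the paper's resources.

```python
# Toy subcategorization frame: the verb "먹다" (eat) requires a subject
# marked 이/가 and an object marked 을/를 (illustrative).
frames = {"먹다": {"이/가", "을/를"}}

# An auxiliary postposition can substitute for several syntactic cases.
aux_candidates = {"은/는": {"이/가", "을/를"}, "도": {"이/가", "을/를"}}

def matches_frame(predicate, marker):
    """Accept a marker as filling a required argument of the predicate."""
    required = frames.get(predicate, set())
    if marker in required:                        # ordinary case marker
        return True
    # Auxiliary postposition: accept if any candidate case is required.
    return bool(aux_candidates.get(marker, set()) & required)

assert matches_frame("먹다", "이/가")   # case marker matches directly
assert matches_frame("먹다", "은/는")   # recall improves: 은/는 is accepted
```

Without the `aux_candidates` step, any sentence using 은/는 in place of 이/가 would fail to match the frame, which is the recall problem the abstract describes.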
Company Name Discrimination in Tweets using Topic Signatures Extracted from News Corpus

  • Hong, Beomseok;Kim, Yanggon;Lee, Sang Ho
    • Journal of Computing Science and Engineering
    • /
    • v.10 no.4
    • /
    • pp.128-136
    • /
    • 2016
  • It is impossible for any human being to analyze the more than 500 million tweets generated per day. Lexical ambiguity on Twitter makes it difficult to retrieve the desired data and relevant topics. Most solutions to the word sense disambiguation problem rely on knowledge-base systems. Unfortunately, manually creating a knowledge-base system is expensive and time-consuming, resulting in a knowledge-acquisition bottleneck. To avoid this bottleneck, topic signatures are used to disambiguate words. In this paper, we evaluate how various features of newspapers affect topic-signature extraction for word sense discrimination in tweets. Based on our results, topic signatures obtained from the snippet feature discriminate company names more accurately than those obtained from the article body. We conclude that topic signatures extracted from news articles improve the accuracy of word sense discrimination in the automated analysis of tweets.
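Topic signatures as used above can be sketched as lists of context words scored by how strongly they associate with each sense (here, each company reading of a name) in labeled news text; a tweet is then assigned the sense whose signature it overlaps most. The tiny corpus, company senses, and raw-frequency scoring below are made up for illustration; the paper extracts signatures from real news features such as snippets and article bodies.

```python
from collections import Counter

# Toy "news corpus": documents labeled with the intended sense of "apple".
corpus = {
    "apple_inc": ["iphone launch apple store", "apple reports quarterly earnings"],
    "apple_fruit": ["apple orchard harvest fruit", "apple pie fruit recipe"],
}

def topic_signature(docs, top_k=5):
    """Score context words by frequency within one sense's documents."""
    counts = Counter(w for doc in docs for w in doc.split())
    return dict(counts.most_common(top_k))

signatures = {sense: topic_signature(docs) for sense, docs in corpus.items()}

def discriminate(tweet):
    """Pick the sense whose topic signature overlaps the tweet most."""
    words = set(tweet.split())
    return max(signatures,
               key=lambda s: sum(w in words for w in signatures[s]))

print(discriminate("new iphone at the apple store"))   # -> apple_inc
```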

Corpus-Based Ontology Learning for Semantic Analysis (의미 분석을 위한 말뭉치 기반의 온톨로지 학습)

  • 강신재
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.9 no.1
    • /
    • pp.17-23
    • /
    • 2004
  • This paper proposes determining word senses in Korean language processing through corpus-based ontology learning. Our approach is a hybrid method: we first apply previously secured dictionary information to select the correct senses of some ambiguous words with high precision, and then use the ontology to disambiguate the remaining ambiguous words. The mutual information between concepts in the ontology is calculated before the ontology is used as knowledge for disambiguating word senses. If mutual information is regarded as a weight between ontology concepts, the ontology can be treated as a graph with weighted edges, and we can locate the least-weighted path from one concept to another. In our practical machine translation system, this word sense disambiguation method achieved a 9% improvement over methods that do not use an ontology for Korean translation.

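The abstract's use of the ontology can be sketched as follows: edges between concepts carry weights (derived in the paper from mutual information), and relatedness between two concepts is read off the least-weighted path between them, which Dijkstra's algorithm finds. The toy concept graph and its weights below are invented for illustration.

```python
import heapq

# Toy concept graph; edge weights stand in for mutual-information-derived costs.
graph = {
    "animal": [("dog", 1.0), ("plant", 4.0)],
    "dog":    [("animal", 1.0), ("bark", 0.5)],
    "plant":  [("animal", 4.0), ("bark", 2.0)],   # "bark" as tree bark
    "bark":   [("dog", 0.5), ("plant", 2.0)],
}

def least_weighted_path(start, goal):
    """Dijkstra's algorithm: total weight of the cheapest path start -> goal."""
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry
        for nbr, w in graph[node]:
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return float("inf")

# "bark" is far closer to "dog" than to "plant" in this toy ontology,
# which is the kind of signal the WSD step reads off the graph.
print(least_weighted_path("dog", "bark"))    # -> 0.5
```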
Emotion Analysis Using a Bidirectional LSTM for Word Sense Disambiguation (양방향 LSTM을 적용한 단어의미 중의성 해소 감정분석)

  • Ki, Ho-Yeon;Shin, Kyung-shik
    • The Journal of Bigdata
    • /
    • v.5 no.1
    • /
    • pp.197-208
    • /
    • 2020
  • Lexical ambiguity means that a word can be interpreted with two or more meanings, as with homonyms and polysemes, and words expressing emotion are especially prone to such ambiguity. Because these words project human psychology, they convey specific and rich contexts, which gives rise to lexical ambiguity. In this study, we propose an emotion classification model that disambiguates word sense using a bidirectional LSTM. It is based on the assumption that if the information of the surrounding context is fully reflected, the lexical ambiguity can be resolved and the emotion a sentence expresses can be reduced to one. A bidirectional LSTM is frequently used in natural language processing research that requires contextual information, and it is used in this study to learn context. GloVe embeddings form the embedding layer of the model, and its performance was verified against models using LSTM and RNN algorithms. Such a framework could contribute to various fields, including marketing, by connecting the emotions of SNS users to their desire for consumption.
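The model in this abstract can be sketched at a toy scale: token embeddings are read once left-to-right and once right-to-left, the two final hidden states are concatenated, and a softmax layer maps the result to emotion classes. To keep the sketch dependency-free it uses plain tanh RNN cells in place of LSTM cells and random untrained weights in place of pretrained GloVe embeddings; the vocabulary and dimensions are invented.

```python
import math
import random

random.seed(0)

def rand_matrix(rows, cols):
    return [[random.uniform(-0.5, 0.5) for _ in range(cols)] for _ in range(rows)]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

EMB, HID, CLASSES = 4, 3, 2            # tiny dimensions for illustration

# Random "embeddings"; the paper uses pretrained GloVe vectors instead.
vocab = {"love": 0, "hate": 1, "this": 2, "movie": 3}
embeddings = rand_matrix(len(vocab), EMB)

Wx_f, Wh_f = rand_matrix(HID, EMB), rand_matrix(HID, HID)   # forward cell
Wx_b, Wh_b = rand_matrix(HID, EMB), rand_matrix(HID, HID)   # backward cell
Wout = rand_matrix(CLASSES, 2 * HID)

def rnn_pass(seq, Wx, Wh):
    """One directional pass; returns the final hidden state."""
    h = [0.0] * HID
    for x in seq:
        h = [math.tanh(a + b) for a, b in zip(matvec(Wx, x), matvec(Wh, h))]
    return h

def classify(words):
    seq = [embeddings[vocab[w]] for w in words]
    h_fwd = rnn_pass(seq, Wx_f, Wh_f)            # left-to-right context
    h_bwd = rnn_pass(seq[::-1], Wx_b, Wh_b)      # right-to-left context
    logits = matvec(Wout, h_fwd + h_bwd)         # concatenate both directions
    exps = [math.exp(z) for z in logits]
    return [e / sum(exps) for e in exps]         # softmax over emotion classes

probs = classify(["love", "this", "movie"])
print(probs)
```

With untrained weights the probabilities are arbitrary; the point of the sketch is only the data flow, in which every token's prediction is conditioned on context from both directions.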

A Korean Homonym Disambiguation System Using Refined Semantic Information and Thesaurus (정제된 의미정보와 시소러스를 이용한 동형이의어 분별 시스템)

  • Kim Jun-Su;Ock Cheol-Young
    • The KIPS Transactions:PartB
    • /
    • v.12B no.7 s.103
    • /
    • pp.829-840
    • /
    • 2005
  • Word sense disambiguation (WSD) is one of the most difficult problems in Korean information processing. We propose a WSD model that filters semantic information using the specific characteristics of dictionary definitions, together with added information useful for sense determination, such as statistical, distance, and case information. The model resolves the issues resulting from the scarcity of semantic data by using the word hierarchy (thesaurus) developed by the University of Ulsan's UOU Word Intelligent Network, a dictionary-based lexical database. Among the WSD models elaborated in this study, the one using statistical information, distance, and case information together with the thesaurus (hereinafter the 'SDJ-X model') performed best. In an experiment on the sense-tagged corpus of 1,500,000 eojeols provided by the Sejong project, the SDJ-X model improved on the maximum-frequency sense determination baseline (MFC) by 18.87% (21.73% for nouns), and the inter-eojeol distance weights improved accuracy by 10.49% (8.84% for nouns, 11.51% for verbs). Finally, the accuracy of the SDJ-X model was higher than that of the model using only statistical, distance, and case information without the thesaurus by a margin of 6.12% (5.29% for nouns, 6.64% for verbs).

A Korean Homonym Disambiguation Model Based on Statistics Using Weights (가중치를 이용한 통계 기반 한국어 동형이의어 분별 모델)

  • 김준수;최호섭;옥철영
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.11
    • /
    • pp.1112-1123
    • /
    • 2003
  • Word sense disambiguation (WSD) is one of the most difficult problems in Korean information processing. A Bayesian model that used semantic information extracted from a definition corpus (1 million POS-tagged eojeols of Korean dictionary definitions) achieved an accuracy of 72.08% (nouns 78.12%, verbs 62.45%). This paper proposes a statistical WSD model using NPH (new prior probability of homonym senses) and distance weights. We select 46 homonyms (30 nouns, 16 verbs) that occur with high frequency in the definition corpus, and evaluate the model on 47,977 contexts from the 21C Sejong Corpus (3.5 million POS-tagged eojeols). The WSD model using NPH improves accuracy by an average of 1.70%, and the model using both NPH and distance weights improves it by 2.01%.
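The scoring idea shared by this entry and the previous one can be sketched as follows: each sense of a homonym carries a prior probability, and co-occurring words contribute log evidence scaled down by their distance from the target word. The homonym, the counts, and the 1/(1+distance) weighting below are illustrative assumptions, not the papers' exact statistics or formula.

```python
import math

# Toy statistics for the homonym "눈" (eye vs. snow): sense priors and
# per-sense co-occurrence counts. All numbers are invented for illustration.
prior = {"eye": 0.6, "snow": 0.4}
cooc = {
    "eye":  {"감다": 30, "보다": 25, "내리다": 1},
    "snow": {"내리다": 40, "겨울": 20, "감다": 1},
}

def score(sense, context, target_idx):
    """Log prior plus distance-weighted log evidence from context words."""
    s = math.log(prior[sense])
    for i, word in enumerate(context):
        if i == target_idx:
            continue
        weight = 1.0 / (1 + abs(i - target_idx))   # nearer words count more
        count = cooc[sense].get(word, 0.1)         # smoothed count
        s += weight * math.log(count)
    return s

def disambiguate(context, target_idx):
    return max(prior, key=lambda sense: score(sense, context, target_idx))

# "눈이 내리다" (snow falls): 내리다 next to 눈 favours the snow sense.
print(disambiguate(["눈", "내리다"], 0))   # -> snow
```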

An Enhanced Method for Unsupervised Word Sense Disambiguation using Korean WordNet (한국어 어휘의미망을 이용한 비감독 어의 중의성 해소 방법의 성능 향상)

  • Kwon, Soonho;Kim, Minho;Kwon, Hyuk-Chul
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2010.11a
    • /
    • pp.693-696
    • /
    • 2010
  • In natural language processing, word sense disambiguation (WSD) is the task of identifying the precise meaning of a word, and it plays an important role in applications such as machine translation and information retrieval. This paper proposes an unsupervised WSD method using the Korean lexical semantic network (Korlex). Statistical information extracted from a sense-untagged corpus and related-word information from the Korean lexical semantic network alleviate the data-scarcity problem. In addition, distance weights between the ambiguous word and its co-occurring words, together with per-sense usage weights, account for linguistic characteristics and improve performance over the PNUWSD system on which this work is based. The proposed WSD method was evaluated on the SENSEVAL-2 Korean data. Using the chi-square statistic between the related words of each sense of an ambiguous word and the co-occurring words in the local context, the accuracy was 68.1%; adding the distance weights between the ambiguous word and its co-occurring words and the per-sense usage weights raised the accuracy to 76.9%, improving on the existing method.
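The chi-square association used in this abstract can be sketched with a 2x2 contingency table counting how often a sense's related word and a context word co-occur in corpus windows; a sense is then scored by summing these associations over its related words. The counts below are illustrative.

```python
def chi_square(n11, n10, n01, n00):
    """Pearson chi-square for a 2x2 contingency table.
    n11: windows containing both the related word and the context word,
    n10/n01: windows containing only one of them, n00: neither."""
    n = n11 + n10 + n01 + n00
    num = n * (n11 * n00 - n10 * n01) ** 2
    den = (n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00)
    return num / den if den else 0.0

# A strongly associated pair scores high; an independent pair scores zero.
strong = chi_square(30, 5, 5, 60)     # co-occurrence far above chance
indep  = chi_square(25, 25, 25, 25)   # statistically independent -> 0
print(strong > indep)                 # -> True
```

In the abstract's setup, the sense whose related words accumulate the largest chi-square association with the words in the local context is chosen as the answer.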