• Title/Summary/Keyword: Hypernym

Search Results: 19

Word Sense Disambiguation Using Knowledge Embedding (지식 임베딩 심층학습을 이용한 단어 의미 중의성 해소)

  • Oh, Dongsuk;Yang, Kisu;Kim, Kuekyeng;Whang, Taesun;Lim, Heuiseok
    • Annual Conference on Human and Language Technology / 2019.10a / pp.272-275 / 2019
  • Word sense disambiguation (WSD) approaches fall into two groups: knowledge-based methods, which solve the problem using knowledge resources, and supervised methods, which solve it using machine learning models. Supervised methods achieve high performance but require large amounts of refined training data; conversely, knowledge-based methods need no such data but cannot be expected to perform as well. Recently, to compensate for these weaknesses, approaches have trained machine learning models on both the information contained in knowledge resources and refined training data. The most widely used knowledge is the gloss (sense definition) information attached to hypernyms, hyponyms, and synonyms. Representations of this information are combined with the sentence representation to identify the sense of an ambiguous word. However, an accurate sentence representation requires good word representations, and previous methods captured only the contextual information within the sentence, which limited how well they reflected word meaning. In this paper, to build word representations that carry both semantic and contextual information, we embed syntactic information and semantic-relation graph information using a GCN (Graph Convolutional Network); incorporating these embeddings into an existing model achieved higher performance than word representations based on context alone.
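The core operation the abstract relies on, propagating word features over a syntactic/semantic-relation graph with a GCN, can be sketched as a single propagation step. This is a toy illustration, not the paper's model: the graph, feature values, and weights below are made up, and a real GCN would stack several learned layers.

```python
# One GCN propagation step over a tiny word graph (toy data).
# h' = ReLU(A_hat @ H @ W), where A_hat is the degree-normalized
# adjacency with self-loops: A_hat = D^-1 (A + I).

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def gcn_layer(adj, feats, weight):
    n = len(adj)
    # Add self-loops, then normalize each row by its degree.
    a_hat = [[adj[i][j] + (1 if i == j else 0) for j in range(n)]
             for i in range(n)]
    for i in range(n):
        deg = sum(a_hat[i])
        a_hat[i] = [v / deg for v in a_hat[i]]
    # Propagate neighbor features and apply ReLU.
    h = matmul(matmul(a_hat, feats), weight)
    return [[max(0.0, v) for v in row] for row in h]

# Toy graph of 3 words: edge 0-1 for a hypernym relation, 1-2 for syntax.
adj = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # 2-dim word features
weight = [[0.5, 0.0], [0.0, 0.5]]              # illustrative weights
print(gcn_layer(adj, feats, weight))
```

Each word's output row mixes its own features with its neighbors', so a word connected to its hypernym picks up semantic-relation information in addition to sentence context.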


Incremental Enrichment of Ontologies through Feature-based Pattern Variations (자질별 관계 패턴의 다변화를 통한 온톨로지 확장)

  • Lee, Sheen-Mok;Chang, Du-Seong;Shin, Ji-Ae
    • The KIPS Transactions:PartB / v.15B no.4 / pp.365-374 / 2008
  • In this paper, we propose a model that enriches an ontology by incrementally extending its relations through variations of patterns. To generalize the initial patterns, combinations of features are considered as candidate patterns. The candidate patterns are used to extract relations from Wikipedia, which are then ranked by reliability based on corpus frequency. Selected patterns are used to extract relations, and the extracted relations are in turn used to extend the patterns of the relation. By varying patterns during the incremental enrichment process, the range of pattern selection is broadened and refined, which can increase the coverage and accuracy of the extracted relations. In experiments with single-feature pattern models, we observe that the lexical, headword, and hypernym features provide reliable information, while the POS and syntactic features provide general information that is useful for enriching relations. Based on these observations about which feature types suit each type of syntactic unit, we propose a pattern model based on the composition of features as our ongoing work.
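The "combinations of features as candidate patterns" step can be sketched directly: given the five feature types the abstract names, candidate patterns are the small feature subsets. This is a minimal illustration of the enumeration only; the paper's actual pattern representation and reliability ranking are not reproduced here.

```python
# Enumerate candidate extraction patterns as combinations of feature
# types (toy sketch; real patterns would bind each feature to values).

from itertools import combinations

FEATURES = ["lexical", "headword", "hypernym", "POS", "syntactic"]

def candidate_patterns(features, max_size=2):
    """All feature subsets of size 1..max_size, as candidate patterns."""
    cands = []
    for k in range(1, max_size + 1):
        cands.extend(combinations(features, k))
    return cands

patterns = candidate_patterns(FEATURES)
print(len(patterns))  # 5 single-feature + 10 two-feature patterns = 15
```

Each candidate would then be scored by how reliably it extracts known relations from the corpus, and only the high-frequency, high-reliability ones kept.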

Semi-automatic Ontology Modeling for VOD Annotation for IPTV (IPTV의 VOD 어노테이션을 위한 반자동 온톨로지 모델링)

  • Choi, Jung-Hwa;Heo, Gil;Park, Young-Tack
    • Journal of KIISE:Software and Applications / v.37 no.7 / pp.548-557 / 2010
  • In this paper, we propose a semi-automatic approach to modeling an ontology for annotating VOD content, enabling intelligent search in IPTV. The ontology is built by combining partial trees that extract the hypernyms, hyponyms, and synonyms of keywords related to a service domain from WordNet. We further add to the partial tree new keywords undefined in WordNet, such as foreign words and words written in Chinese characters. The ontology consists of two parts: a generic hierarchy and a specific hierarchy. The former is the semantic model of vocabularies such as keywords and their contents, defined as classes with property restrictions in the ontology. The latter is generated by inferring the contents of keywords based on the generic hierarchy. Annotation then generates metadata (i.e., contents and genre) of VOD based on the specific hierarchy. The generic hierarchy can be applied to other domains, while the specific hierarchy tailors the ontology to the service domain, so the approach generates metadata independent of any specific domain. The proposed method achieved around 82% precision on 2,400 VOD annotation test items.
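The two manual-looking steps in the abstract, assembling a partial hierarchy from hypernym relations and grafting in keywords missing from WordNet, can be sketched with toy data. The relation table and the loanword below are invented for illustration and are not taken from WordNet or the paper.

```python
# Build a partial concept tree from child->parent hypernym links and
# graft a keyword undefined in the lexicon (e.g. a loanword) under a
# manually chosen parent. All data here is illustrative.

from collections import defaultdict

hypernyms = {            # child -> parent (toy relations)
    "drama": "genre",
    "sitcom": "comedy",
    "comedy": "genre",
}

def build_tree(hypernyms):
    tree = defaultdict(list)   # parent -> children
    for child, parent in hypernyms.items():
        tree[parent].append(child)
    return tree

def graft(tree, new_word, parent):
    # The semi-automatic step: a human picks the parent for a word
    # the lexicon does not define, and it is attached to the tree.
    tree[parent].append(new_word)

tree = build_tree(hypernyms)
graft(tree, "makjang", "drama")   # hypothetical loanword keyword
print(sorted(tree["genre"]), tree["drama"])
```

The resulting tree plays the role of the generic hierarchy; the specific hierarchy would then be derived by reasoning over these classes.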

Detection of Character Emotional Type Based on Classification of Emotional Words at Story (스토리기반 저작물에서 감정어 분류에 기반한 등장인물의 감정 성향 판단)

  • Baek, Yeong Tae
    • Journal of the Korea Society of Computer and Information / v.18 no.9 / pp.131-138 / 2013
  • In this paper, I propose and evaluate a method that classifies the emotional type of characters from their emotional words. Emotional types are classified into three classes, positive, negative, and neutral, determined by classifying the emotional words the characters speak. I propose a method to extract emotional words based on WordNet and to represent them as emotional vectors. WordNet is a thesaurus with a network structure connected by hypernym, hyponym, synonym, antonym, and other relations. An emotional word is extracted by calculating its emotional distance to each of 30 emotional categories, so an emotional vector has 30 dimensions. When all the emotional vectors of a character are accumulated, his or her emotion across a movie can be represented as a single emotional vector. The thirty emotional categories can in turn be collapsed into the three elements of positive, negative, and neutral, so the emotion of a character can be represented by three values. The proposed method was evaluated on 12 characters from four movies and showed an accuracy of 75%.
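The accumulation-and-collapse step the abstract describes can be sketched as follows. The mapping from 30 categories to the three polarity elements is a toy stand-in (the paper's actual category-to-polarity assignment is not given here), and the per-utterance vectors are invented.

```python
# Accumulate per-utterance 30-dim emotion vectors for one character,
# then collapse the categories into positive/negative/neutral totals.
# The polarity assignment below is illustrative only.

N_CATEGORIES = 30
POLARITY = ["pos"] * 12 + ["neg"] * 12 + ["neu"] * 6  # toy mapping

def accumulate(vectors):
    total = [0.0] * N_CATEGORIES
    for vec in vectors:
        total = [t + v for t, v in zip(total, vec)]
    return total

def polarity_profile(total):
    profile = {"pos": 0.0, "neg": 0.0, "neu": 0.0}
    for value, pol in zip(total, POLARITY):
        profile[pol] += value
    # The character's emotional type is the dominant polarity.
    return max(profile, key=profile.get), profile

# Two toy utterance vectors for one character.
u1 = [1.0] * 12 + [0.0] * 18
u2 = [0.5] * 12 + [0.2] * 12 + [0.0] * 6
label, profile = polarity_profile(accumulate([u1, u2]))
print(label, profile)
```

Summing first and collapsing second preserves the full 30-dimensional profile for analysis while still yielding the three-valued type used for classification.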

Constructing the Semantic Information Model using A Collective Intelligence Approach

  • Lyu, Ki-Gon;Lee, Jung-Yong;Sun, Dong-Eon;Kwon, Dai-Young;Kim, Hyeon-Cheol
    • KSII Transactions on Internet and Information Systems (TIIS) / v.5 no.10 / pp.1698-1711 / 2011
  • Knowledge is often represented as a set of rules or a semantic network in intelligent systems. Recently, ontologies have been widely used to represent semantic knowledge, because they organize thesaurus and hierarchical information between concepts in a particular domain. However, it is not easy to collect semantic relationships among concepts, and ontology construction incurs much time and expense. Collective intelligence can be a good alternative approach to these problems. In this paper, we propose a collective-intelligence approach based on Games With A Purpose (GWAP) to collect various semantic resources, such as words and word senses. We detail how to construct a semantic information model or ontology from the collected resources, building a system named FunWords, a Korean lexical semantic resource collection tool. Experiments showed that the resources grouped into common nouns, abstract nouns, adjectives, and neologisms, and we analyzed the semantic relationships characteristic of each group. Common nouns emphasize structural relationships such as hypernym and hyponym. Abstract nouns emphasize descriptive and characteristic relationships such as synonym and antonym. Adjectives more often express relationships of description and status, illustrated for example by color and sound. Finally, neologisms emphasize relationships of description and characteristics. Weighting semantic relationships according to these characteristics can reduce time and cost, because unnecessary or weakly related factors need not be considered, and it can improve expressive power, such as readability, by concentrating on the weighted characteristics. Our proposal, collecting semantic resources through the GWAP-based collective-intelligence approach of FunWords and weighting their semantic relationships, offers a more effective and expressive alternative for constructing a semantic information model or ontology.

Improvement of Korean Homograph Disambiguation using Korean Lexical Semantic Network (UWordMap) (한국어 어휘의미망(UWordMap)을 이용한 동형이의어 분별 개선)

  • Shin, Joon-Choul;Ock, Cheol-Young
    • Journal of KIISE / v.43 no.1 / pp.71-79 / 2016
  • Disambiguating homographs is an important task in Korean semantic processing and has been researched for a long time. Recently, machine learning approaches have demonstrated good results in accuracy and speed, while knowledge-based approaches are being researched for untrained words. This paper proposes a hybrid method that augments the machine learning approach with a lexical semantic network: it creates an additional corpus from subcategorization information and trains on it, and the homograph tagging phase uses both the homograph's hypernyms and this additional corpus. Experiments with the Sejong Corpus and UWordMap show the hybrid method to be effective, with an increase in accuracy from 96.51% to 96.52%.
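The hybrid idea, prefer trained corpus evidence and back off to the lexical network's hypernym information for untrained cases, can be sketched with toy data. The sense names, counts, and hypernym table below are invented and are not from UWordMap or the Sejong Corpus.

```python
# Hybrid homograph tagging sketch: use corpus statistics when the
# (homograph, context) pair was seen in training; otherwise fall back
# to matching the context against each sense's hypernym.

corpus_counts = {            # (homograph, context word) -> sense counts
    ("bat", "baseball"): {"bat/club": 5},
}
sense_hypernyms = {          # sense -> hypernym (toy lexical network)
    "bat/animal": "mammal",
    "bat/club": "equipment",
}

def tag(homograph, context_word):
    counts = corpus_counts.get((homograph, context_word))
    if counts:                                    # trained evidence wins
        return max(counts, key=counts.get)
    for sense, hyper in sense_hypernyms.items():  # knowledge back-off
        if sense.startswith(homograph) and hyper == context_word:
            return sense
    return None

print(tag("bat", "baseball"), tag("bat", "mammal"))
```

The real method generates the additional training corpus from subcategorization frames rather than looking the hypernym up at tagging time alone, but the division of labor between statistics and the semantic network is the same.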

Learning Rules for Identifying Hypernyms in Machine Readable Dictionaries (기계가독형사전에서 상위어 판별을 위한 규칙 학습)

  • Choi Seon-Hwa;Park Hyuk-Ro
    • The KIPS Transactions:PartB / v.13B no.2 s.105 / pp.171-178 / 2006
  • Most approaches to extracting the hypernyms of a noun from its definitions in a machine-readable dictionary (MRD) rely on lexical patterns compiled by human experts. Not only do these approaches incur a high cost for compiling lexical patterns, but it is also very difficult for human experts to compile a broad-coverage set of them, because natural languages contain various expressions that represent the same concept. To alleviate these problems, this paper proposes a new method for extracting the hypernyms of a noun from its MRD definitions. The proposed approach uses only syntactic (part-of-speech) patterns instead of lexical patterns when identifying hypernyms, reducing the number of patterns while keeping their coverage broad. Our experiment shows a classification accuracy of 92.37%, significantly better than that of previous approaches.
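The contrast between lexical and POS patterns can be made concrete with one toy POS pattern. The tag set and the single "determiner (+ adjectives) + noun" rule below are illustrative, not the rules learned in the paper.

```python
# Identify the genus term (hypernym) in a POS-tagged definition using
# a syntactic pattern rather than a lexical one: the noun heading a
# "DET (ADJ)* NOUN" sequence. Tags and the rule are illustrative.

def find_hypernym(tagged_def):
    i = 0
    while i < len(tagged_def):
        word, tag = tagged_def[i]
        if tag == "DET":
            j = i + 1
            while j < len(tagged_def) and tagged_def[j][1] == "ADJ":
                j += 1                      # skip modifiers
            if j < len(tagged_def) and tagged_def[j][1] == "NOUN":
                return tagged_def[j][0]     # head noun = genus term
        i += 1
    return None

# "a large African mammal with a trunk" -> genus term "mammal"
definition = [("a", "DET"), ("large", "ADJ"), ("African", "ADJ"),
              ("mammal", "NOUN"), ("with", "ADP"), ("a", "DET"),
              ("trunk", "NOUN")]
print(find_hypernym(definition))
```

Because the rule mentions no specific words, it covers "a large African mammal", "an agile nocturnal mammal", and so on with one pattern, which is exactly the coverage advantage the abstract claims for POS patterns.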

Representative Labels Selection Technique for Document Cluster using WordNet (문서 클러스터를 위한 워드넷기반의 대표 레이블 선정 방법)

  • Kim, Tae-Hoon;Sohn, Mye
    • Journal of Internet Computing and Services / v.18 no.2 / pp.61-73 / 2017
  • In this paper, we propose a document-cluster labeling method that uses the information content of the words in clusters to convey what the clusters imply. To do so, we calculate the weight and frequency of the words; these two measures determine the relative weight among the words in a cluster. As a next step, we identify candidate labels using WordNet, matching candidate labels to the least common hypernym of the words in the cluster. Finally, the representative labels are determined from the information content and the weights of the words. To demonstrate the merit of our method, we perform a heuristic experiment using two measures: the suitability of the candidate label ($Suitability_{cl}$) and the appropriacy of the representative label ($Appropriacy_{rl}$). With the proposed method, suitability of the candidate label decreases slightly compared with existing methods, but the computational cost is about 20% of theirs, and the appropriacy of the representative label improves on existing methods. As a result, the method is expected to help data analysts interpret document clusters more easily.
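The central operation, the least common hypernym of a cluster's words, can be sketched over a toy taxonomy. Depth is used below as a crude stand-in for information content (deeper means more specific); the taxonomy itself is invented, not WordNet.

```python
# Least common hypernym over a toy child->parent taxonomy, used as a
# candidate cluster label. Depth stands in for information content.

TAXONOMY = {
    "dog": "canine", "wolf": "canine", "canine": "mammal",
    "cat": "feline", "feline": "mammal",
    "mammal": "animal", "animal": "entity",
}

def ancestors(word):
    """The word itself plus its hypernym chain up to the root."""
    chain = [word]
    while chain[-1] in TAXONOMY:
        chain.append(TAXONOMY[chain[-1]])
    return chain

def least_common_hypernym(words):
    common = set(ancestors(words[0]))
    for w in words[1:]:
        common &= set(ancestors(w))
    # The deepest shared ancestor is the least common hypernym.
    return max(common, key=lambda c: len(ancestors(c)))

print(least_common_hypernym(["dog", "wolf"]))  # -> canine
print(least_common_hypernym(["dog", "cat"]))   # -> mammal
```

With WordNet one would use the synset hierarchy instead of this table; the final label is then chosen among such candidates by information content and word weight, as the abstract describes.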

Content-based Recommendation Based on Social Network for Personalized News Services (개인화된 뉴스 서비스를 위한 소셜 네트워크 기반의 콘텐츠 추천기법)

  • Hong, Myung-Duk;Oh, Kyeong-Jin;Ga, Myung-Hyun;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems / v.19 no.3 / pp.57-71 / 2013
  • Over a billion people in the world generate news minute by minute. Some news can be forecast, but most arises from unexpected events such as natural disasters, accidents, and crimes. People spend much time watching the huge volume of news delivered by many media outlets because they want to understand what is happening now, predict what might happen in the near future, and share and discuss the news; watching it and extracting useful information supports better daily decisions. However, it is difficult for people to choose news suited to them and to obtain useful information from it, because there are so many news media, such as portal sites and broadcasters, and most articles consist of gossip and breaking news. User interest also changes over time, and many people have no interest in outdated news, so a personalized news service must reflect users' recent interests and manage user profiles dynamically. In this paper, a content-based news recommendation system is proposed to provide such a personalized news service. Personalization requires the user's personal information, which is extracted from a social network service. The proposed system constructs a dynamic user profile from recent user information on Facebook, comprising personal information, recent articles, and Facebook Page information. Facebook Pages let businesses, organizations, and brands share their content and connect with people, and users can add a Page to signal interest in it; the proposed system uses this Page information to create the user profile and to match user preferences to news topics.
    However, some Pages do not map directly to a news topic, because a Page covers an individual object and provides no topic information suitable for news. Freebase, a large collaborative database of well-known people, places, and things, is used to match such Pages to news topics via the hierarchy information of its objects. Using the recent Page information and articles of Facebook users, the proposed system maintains a dynamic user profile, which is used to measure user preferences on news. To generate a news profile, the news categories predefined by the media are used, and keywords are extracted after analyzing news contents including title, category, and scripts. The TF-IDF technique, which reflects how important a word is to a document in a corpus, identifies the keywords of each article. User profiles and news profiles share the same format so that the similarity between user preferences and news can be measured efficiently, and the system calculates all similarity values between user profiles and news profiles. Existing similarity calculations in the vector space model handle only the given words and thus do not cover synonyms, hypernyms, and hyponyms; the proposed system applies WordNet to the similarity calculation to overcome this limitation. The top-N articles with the highest similarity for a target user are then recommended. To evaluate the system, user profiles were generated from Facebook accounts with the participants' consent, and a Web crawler was implemented to extract news from PBS, a non-profit public broadcasting television network in the United States, from which news profiles were constructed. We compare the performance of the proposed method with two benchmark algorithms: a traditional TF-IDF-based method and the 6Sub-Vectors method, which divides the points used to obtain keywords into six parts. Experimental results demonstrate that, in terms of the prediction error of recommended news, the proposed system provides useful news to users by applying the user's social network information and WordNet functions.
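The profile-matching core of the abstract, TF-IDF weighting plus cosine similarity between a user profile and candidate news profiles, can be sketched with toy documents. The WordNet-based synonym/hypernym expansion the paper adds on top is omitted here, and all terms are invented.

```python
# TF-IDF vectors and cosine similarity between a user profile and two
# toy news profiles. WordNet expansion is deliberately left out.

import math
from collections import Counter

def tfidf(docs):
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        # Weight: raw term frequency times inverse document frequency.
        vecs.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vecs

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

user = ["election", "policy", "debate"]                 # user profile terms
news = [["election", "debate", "poll"],                 # politics article
        ["sports", "game", "score"]]                    # sports article
vecs = tfidf([user] + news)
scores = [cosine(vecs[0], v) for v in vecs[1:]]
print(scores)  # the politics article scores higher for this user
```

Because the vectors only contain literal terms, a user interested in "election" would score zero against an article that says only "vote"; that gap is what the paper's WordNet-based similarity step is meant to close.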