• Title/Summary/Keyword: 용어가중치 (term weighting)

Search Results: 97

Automatic Document Classification by Term-Weighting Method (범주 대표어의 가중치 계산 방식에 의한 자동 문서 분류 시스템)

  • 이경찬;강승식
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2002.04b
    • /
    • pp.475-477
    • /
    • 2002
  • Automatic document classification selects the most similar category by comparing a category feature vector with the input document vector. To implement the document classification system, the feature vector of each category was built as an inverted file, as in an information retrieval system, and the accuracy of the system was tested with different methods of computing term weights. The test documents were a randomly sampled set of daily newspaper articles, and the TF-IDF scheme commonly used in information retrieval models performed better than its modified variants.

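The TF-IDF weighting this entry refers to can be sketched in a few lines. This is the generic formulation, not necessarily the paper's exact variant, and the toy corpus is invented for illustration:

```python
import math

def tf_idf(term, doc, docs):
    """Classic TF-IDF: term frequency times inverse document frequency."""
    tf = doc.count(term)
    df = sum(1 for d in docs if term in d)  # number of documents containing the term
    idf = math.log(len(docs) / df) if df else 0.0
    return tf * idf

# Toy corpus: each document is a list of tokens.
docs = [["news", "sports", "goal"], ["news", "politics"], ["goal", "match"]]
print(tf_idf("goal", docs[0], docs))  # rarer terms receive higher weight
```

A classifier in this style scores an input document against each category's inverted file using such weights and picks the best-scoring category.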

Determining the Specificity of Terms using Compositional and Contextual Information (구성정보와 문맥정보를 이용한 전문용어의 전문성 측정 방법)

  • Ryu Pum-Mo;Bae Sun-Mee;Choi Key-Sun
    • Journal of KIISE:Software and Applications
    • /
    • v.33 no.7
    • /
    • pp.636-645
    • /
    • 2006
  • A term with more domain-specific information has a higher level of term specificity. We propose new specificity calculation methods based on information-theoretic measures that use the compositional and contextual information of terms. The specificity of terms is a kind of necessary condition in the term hierarchy construction task. The compositional information includes frequency, $tf{\cdot}idf$, bigrams, and the internal structure of terms; the contextual information of a term includes the probabilistic distribution of its modifiers. The proposed methods can be applied to other domains without extra procedures. Experiments showed very promising results, with a precision of 82.0% when applied to the terms in the MeSH thesaurus.
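One way to read the contextual measure described above: a term whose modifier distribution is concentrated is treated as more specific. A minimal sketch, assuming an entropy-based criterion and invented modifier observations (not the paper's exact formula):

```python
import math
from collections import Counter

def modifier_entropy(modifiers):
    """Shannon entropy of a term's modifier distribution; lower entropy
    (modifiers concentrated on a few words) suggests higher specificity."""
    counts = Counter(modifiers)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical modifier observations for two terms.
general = ["acute", "chronic", "viral", "mild", "severe"]   # spread out
specific = ["acute", "acute", "acute", "chronic"]           # concentrated
print(modifier_entropy(general) > modifier_entropy(specific))  # True
```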

A Comparative Analysis of the Relevance Weighted Boolean Model and the P-NORM Model: An Improvement on the Boolean Retrieval (적합성 가중치 검색 및 P-NORM 검색에 관한 연구 -불 논리 검색의 개선을 중심으로-)

  • 이효숙
    • Journal of the Korean Society for Information Management
    • /
    • v.11 no.1
    • /
    • pp.31-56
    • /
    • 1994
  • To evaluate the retrieval effectiveness of the Boolean Request Conversion Model, the Relevance Weighted Boolean Model, and the P-NORM Model, the present study carried out experimental tests. It is shown that the Relevance Weighted Boolean Model is more effective in precision and in document output ranking than the other models. The experimental results indicate a promising application of relevance information and weighting schemes.

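The P-NORM model compared above softens Boolean operators with a parameter p. A sketch of the standard formulation, with illustrative term weights; p=1 reduces both operators to a simple average, while large p recovers strict Boolean max/min behavior:

```python
def pnorm_or(weights, p=2.0):
    """P-NORM OR: distance from the all-zero point; a strong match on any
    query term raises the score."""
    n = len(weights)
    return (sum(w ** p for w in weights) / n) ** (1.0 / p)

def pnorm_and(weights, p=2.0):
    """P-NORM AND: one minus the distance from the ideal point where every
    query term matches fully (weight 1)."""
    n = len(weights)
    return 1.0 - (sum((1.0 - w) ** p for w in weights) / n) ** (1.0 / p)

# A document matching one query term strongly and one weakly.
print(pnorm_or([0.9, 0.1]))   # rewards the strong match
print(pnorm_and([0.9, 0.1]))  # penalizes the weak match
```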

An XML Keyword Indexing Method Using Lexical Similarity (단락 분류에 따른 XML 키워드 가중치 결정 기법)

  • Jeong, Hye-Jin;Kim, Hyoung-Jin
    • Proceedings of the KAIS Fall Conference
    • /
    • 2008.05a
    • /
    • pp.205-208
    • /
    • 2008
  • To extract keywords and determine keyword weights more effectively, research has been conducted on extracting index terms using not only the content of a document but also its structure; most studies, however, compute the importance of a paragraph in its textual context rather than the importance of individual XML paragraphs. Moreover, most of these studies determine importance with simple, generic values rather than validating it through objective experiments. For the automatic indexing of XML documents, which are becoming a standard for web document management, this paper subdivides the main paragraphs that make up a paper and proposes a method that computes the final index-term weight by updating the weights of the terms extracted from each paragraph.

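The section-sensitive weighting described above can be sketched as accumulating, for each term, a frequency contribution scaled by the importance of the section it appears in. The section factors and the sample paper below are hypothetical, not values from the study:

```python
# Hypothetical section importance factors (not from the paper).
SECTION_WEIGHT = {"title": 3.0, "abstract": 2.0, "body": 1.0, "conclusion": 1.5}

def index_weights(sections):
    """Update each term's index weight with the weighted frequency
    contributed by every section (XML paragraph class) it appears in."""
    weights = {}
    for name, tokens in sections.items():
        factor = SECTION_WEIGHT.get(name, 1.0)
        for t in tokens:
            weights[t] = weights.get(t, 0.0) + factor
    return weights

paper = {"title": ["xml", "indexing"], "abstract": ["xml", "keyword"],
         "body": ["keyword", "weight", "xml"]}
w = index_weights(paper)
print(w["xml"] > w["keyword"])  # True: "xml" appears in higher-weighted sections
```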

Comparing the Usages of Vocabulary by Medias for Disaster Safety Terminology Construction (재난안전 용어사전 구축을 위한 미디어별 어휘 사용 양상 비교)

  • Lee, Jung-Eun;Kim, Tae-Young;Oh, Hyo-Jung
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.7 no.6
    • /
    • pp.229-238
    • /
    • 2018
  • Rapid response to disaster accidents can be achieved through the coordinated involvement of various disaster and safety control agencies. Defining disaster safety terminology is essential both for communication between disaster safety agencies and for announcements to the public. Also, to construct a dictionary of disaster safety terminology efficiently, it is necessary to define the priority of the terms. In order to establish the direction of dictionary construction, this paper compares the usage of disaster safety terminology across media: a word dictionary, new media, and social media, respectively. Based on the terminology resources collected from each medium, we visualized the distribution of terminology according to frequency weights and analyzed co-occurrence patterns. We also classified the types of terminology into four categories and proposed priorities for the construction of a disaster safety word dictionary.
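The co-occurrence analysis mentioned above amounts to counting how often two terms appear in the same text unit. A minimal sketch with invented snippets (the actual study used corpora collected per medium):

```python
from collections import Counter
from itertools import combinations

def cooccurrence(texts):
    """Count how often each pair of terms appears in the same document;
    frequent pairs reveal which terms are used together."""
    pairs = Counter()
    for doc in texts:
        for a, b in combinations(sorted(set(doc)), 2):
            pairs[(a, b)] += 1
    return pairs

# Toy tokenized snippets (hypothetical).
reports = [["earthquake", "evacuation", "alert"],
           ["earthquake", "alert"],
           ["flood", "evacuation"]]
c = cooccurrence(reports)
print(c.most_common(1))  # the most frequently co-occurring pair
```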

A Leveling and Similarity Measure using Extended AHP of Fuzzy Term in Information System (정보시스템에서 퍼지용어의 확장된 AHP를 사용한 레벨화와 유사성 측정)

  • Ryu, Kyung-Hyun;Chung, Hwan-Mook
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.19 no.2
    • /
    • pp.212-217
    • /
    • 2009
  • Rule-based and statistics-based learning methods, among others, are representative approaches to learning hierarchical relations between domain terms. In this paper, we propose leveling and similarity measurement of fuzzy terms in an information system using an extended AHP. In the proposed method, we extract fuzzy terms from documents, organize an ontology structure for them, and level the priority of the fuzzy terms with the extended AHP according to their specificity. The extended AHP integrates multiple decision-makers' weights and the relative importance of the fuzzy terms. We then compute the semantic similarity of fuzzy terms using the min operation on fuzzy sets, Dice's coefficient, and a combined min + Dice's coefficient method, and determine the final alternative fuzzy term. Comparing the three similarity measures, we find that the proposed method is more definite than the classification performance of conventional methods and can be applied in the natural language processing field.
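Two of the similarity measures named above are standard and easy to sketch: Dice's coefficient over term sets, and a fuzzy-set similarity built from the min operation on membership degrees. The membership values below are invented for illustration:

```python
def dice(a, b):
    """Dice's coefficient between two term sets."""
    return 2 * len(a & b) / (len(a) + len(b))

def fuzzy_min_similarity(mu_a, mu_b):
    """Fuzzy-set similarity via the min operation: the size of the
    min-intersection divided by the size of the max-union of memberships."""
    keys = set(mu_a) | set(mu_b)
    inter = sum(min(mu_a.get(k, 0.0), mu_b.get(k, 0.0)) for k in keys)
    total = sum(max(mu_a.get(k, 0.0), mu_b.get(k, 0.0)) for k in keys)
    return inter / total if total else 0.0

print(dice({"risk", "hazard"}, {"risk", "danger"}))        # 0.5
print(fuzzy_min_similarity({"risk": 0.8}, {"risk": 0.4}))  # 0.5
```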

Wordnet Extension for IT terminology Using Web Search (웹 검색을 활용한 워드넷에서의 IT 전문 용어 확장)

  • Park, Kyeong-Kook;Lee, Kwang-Mo;Kim, Yu-Seop
    • Annual Conference on Human and Language Technology
    • /
    • 2007.10a
    • /
    • pp.189-193
    • /
    • 2007
  • In this paper, we designed a methodology to expand WordNet. We added unknown terms, such as IT technical terms, to the existing WordNet by using web search. WordNet is an online taxonomy representing the relationships among terms, but it is often limited in its coverage of new technical terminology, which is why we tried to expand it. First, when we encountered terms unregistered in WordNet, we built a web search query from those terms. Given the web search results, we tried to find terms with a high level of relatedness to the unregistered terms. We used a Korean morphological analyzer to score the relatedness between terms and placed each unregistered term as a hyponym of the terms with high relatedness scores.

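The placement step above can be sketched as: score each known WordNet term by how strongly it co-occurs with the unregistered term in search snippets, then attach the new term as a hyponym of the best-scoring candidate. The scoring function and all data below are crude illustrative assumptions, not the paper's morphological-analysis scoring:

```python
def relatedness(candidate, snippets):
    """Crude relatedness: the fraction of search snippets in which a known
    term co-occurs with the unregistered term (a stand-in for the paper's
    morphological-analysis-based scoring)."""
    return sum(1 for s in snippets if candidate in s) / len(snippets)

# Hypothetical snippet token sets returned for the unregistered term "blockchain".
snippets = [{"blockchain", "ledger", "technology"},
            {"blockchain", "ledger"},
            {"blockchain", "currency"}]
candidates = ["technology", "ledger", "currency"]
best = max(candidates, key=lambda c: relatedness(c, snippets))
print(best)  # the unregistered term would be placed as this term's hyponym
```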

Hierarchic Document Clustering in OPAC (OPAC에서 자동분류 열람을 위한 계층 클러스터링 연구)

  • 노정순
    • Journal of the Korean Society for Information Management
    • /
    • v.21 no.1
    • /
    • pp.93-117
    • /
    • 2004
  • This study develops a hierarchic clustering model for document classification and browsing in OPAC systems. Two automatic indexing techniques (with and without controlled terms), two term weighting methods (based on term frequency and binary weight), five similarity coefficients (Dice, Jaccard, Pearson, Cosine, and Squared Euclidean), and three hierarchic clustering algorithms (Between Average Linkage, Within Average Linkage, and Complete Linkage) were tested on a collection of 175 books and theses on library and information science. The best document clusters resulted from the Between Average Linkage or Complete Linkage method with the Jaccard or Dice coefficient on automatic indexing with controlled terms in binary vectors. The clusters from Between Average Linkage with Jaccard were the most similar to the decimal classification structure.
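The best-performing combination reported above (binary term vectors, Jaccard coefficient, average linkage) maps directly onto standard library calls. A sketch using SciPy on an invented binary term-document matrix; `method="average"` is UPGMA, corresponding to Between Average Linkage, and `method="complete"` would give Complete Linkage:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

# Toy binary term-document matrix (rows: documents, columns: controlled terms).
docs = np.array([[1, 1, 0, 0],
                 [1, 1, 1, 0],
                 [0, 0, 1, 1],
                 [0, 0, 1, 1]], dtype=bool)

# Jaccard distance + average linkage (UPGMA, i.e. Between Average Linkage).
dist = pdist(docs, metric="jaccard")
tree = linkage(dist, method="average")
labels = fcluster(tree, t=2, criterion="maxclust")
print(labels)  # the first two documents cluster apart from the last two
```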

Analysis of Scientific Item Networks from Science and Biology Textbooks (고등학교 과학 및 생물교과서 과학용어 네트워크 분석)

  • Park, Byeol-Na;Lee, Yoon-Kyeong;Ku, Ja-Eul;Hong, Young-Soo;Kim, Hak-Yong
    • The Journal of the Korea Contents Association
    • /
    • v.10 no.5
    • /
    • pp.427-435
    • /
    • 2010
  • We extracted core terms by constructing scientific item networks from textbooks, analyzing their structures, and investigating the connected information and its relationships. For this research, we chose three high-school textbooks from different publishers for each of three subjects, i.e., Science, Biology I, and Biology II, and constructed networks by linking the scientific items in each sentence, where the items were regarded as nodes. The scientific item networks from all textbooks showed scale-free characteristics. When core networks were established by applying the k-core algorithm, a commonly used method for removing lower-weighted nodes and links from a complex network, they showed a modular structure. Science textbooks formed four main modules of physics, chemistry, biology, and earth science, while Biology I and Biology II textbooks revealed core networks composed of more detailed, specific items in each field. These findings demonstrate the structural characteristics of the networks in textbooks and suggest core scientific items helpful for students' understanding of concepts in Science and Biology.
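The k-core reduction used above has a simple definition: repeatedly delete every node with fewer than k neighbors until none remain. A self-contained sketch on an invented toy item network:

```python
def k_core(adj, k):
    """Iteratively remove nodes of degree < k; the surviving subgraph is
    the k-core, keeping only the densely connected items."""
    adj = {n: set(nbrs) for n, nbrs in adj.items()}  # work on a copy
    changed = True
    while changed:
        changed = False
        for n in [n for n, nbrs in adj.items() if len(nbrs) < k]:
            for m in adj.pop(n):
                if m in adj:
                    adj[m].discard(n)
            changed = True
    return adj

# Toy item network: a triangle of items plus one pendant node.
g = {"cell": {"gene", "protein"}, "gene": {"cell", "protein"},
     "protein": {"cell", "gene", "enzyme"}, "enzyme": {"protein"}}
core = k_core(g, 2)
print(sorted(core))  # the pendant node "enzyme" is pruned away
```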

A Study on the Performance Improvement of Rocchio Classifier with Term Weighting Methods (용어 가중치부여 기법을 이용한 로치오 분류기의 성능 향상에 관한 연구)

  • Kim, Pan-Jun
    • Journal of the Korean Society for Information Management
    • /
    • v.25 no.1
    • /
    • pp.211-233
    • /
    • 2008
  • This study examines various weighting methods for improving the performance of automatic classification based on the Rocchio algorithm on two collections (LISA, Reuters-21578). First, three factors for weighting were identified, a document factor, a document set factor, and a category factor, and the performance of the single schemes based on each factor was investigated. Second, the performance of weighting methods combining the single schemes was examined. Among the single schemes, category-factor-based schemes showed the best performance, document-set-factor-based schemes the second best, and document-factor-based schemes the worst. Among the combined schemes, those combining the document set factor with the category factor (idf*cat) outperformed both the schemes combining the document factor with the category factor (tf*cat or ltf*cat) and the common schemes combining the document factor with the document set factor (tfidf or ltfidf). However, comparing single and combined schemes across the collections, the category-factor-only schemes (cat only) performed best on LISA, while the combined schemes (idf*cat) performed best on Reuters-21578. Therefore, practical application of these weighting methods requires careful consideration of the categories in a collection for automatic classification.
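The Rocchio classifier whose weighting is tuned above reduces, in its basic form, to two steps: build a centroid (prototype) vector per category from its training documents, then assign a new document to the category with the most similar centroid. A minimal sketch with invented weighted document vectors:

```python
import math

def centroid(vectors):
    """Rocchio prototype: the mean of a category's document vectors."""
    terms = {t for v in vectors for t in v}
    return {t: sum(v.get(t, 0.0) for v in vectors) / len(vectors) for t in terms}

def cosine(a, b):
    dot = sum(a.get(t, 0.0) * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(doc, prototypes):
    """Assign the category whose centroid is most cosine-similar."""
    return max(prototypes, key=lambda c: cosine(doc, prototypes[c]))

# Hypothetical weighted vectors (the study compares how these weights are computed).
prototypes = {"sports": centroid([{"goal": 2.0, "match": 1.0}, {"goal": 1.0}]),
              "politics": centroid([{"vote": 2.0}, {"vote": 1.0, "law": 1.0}])}
print(classify({"goal": 1.0, "match": 0.5}, prototypes))  # 'sports'
```

The weighting schemes compared in the study (tf, idf, cat, and their products) would change how the entries of these vectors are computed before the centroids are formed.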