• Title/Summary/Keyword: TF-IDF weighting (TF-IDF 가중치)

63 search results

A Study on the Applicability of 2-Poisson Model for Selecting Korean Subject Words (2-포아송 모형을 이용한 한글 주제어 선정에 관한 연구)

  • 정영미;최대식
    • Journal of the Korean Society for Information Management / v.17 no.1 / pp.129-148 / 2000
  • Experiments were performed on three subsets of a Korean test collection to determine whether the 2-Poisson model's Z value is a good measure for selecting subject words from a document to be indexed. Subject word selection based on the Z value was found to be effective for only one subset with short texts, the Science and Technology subset. Correlation analyses between the 2-Poisson model's Z value and the TF·IDF weight for the three subsets showed that the correlation was relatively high for the two test subsets with short texts, i.e., the Science and Technology subset and the Newspaper subset.

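
The TF-IDF weight that recurs throughout these results can be sketched in a few lines. The following is a minimal illustration using raw term counts and the unsmoothed log(N/df) IDF, one of several common variants rather than the exact formulation of any paper above:

```python
import math
from collections import Counter

def tf_idf(docs):
    """TF-IDF weights for each term of each tokenized document.
    TF is the raw count of the term in the document; IDF is
    log(N / df), where df is the number of documents containing it."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))          # count each term once per document
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return weights

docs = [
    ["poisson", "model", "index", "term"],
    ["index", "term", "weight"],
    ["poisson", "distribution"],
]
w = tf_idf(docs)
# "weight" occurs in only one of the three documents, so in doc 1 it
# outweighs "index", which occurs in two.
```

A term appearing in every document gets log(1) = 0, which is why rare, document-specific terms dominate the ranking.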

Automatic Keyword Extraction in News Articles for Trend Tracking (키워드 가중치를 이용한 뉴스 기사에서의 이슈 키워드 자동 추출 시스템)

  • Kim, Miji;Lee, Jaewon;Jang, Dalwon;Lee, JongSeol
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2018.11a / pp.150-152 / 2018
  • This paper introduces a system that automatically extracts issue keywords from collections of news articles published on portal sites. The existing keyword extraction systems used by portal sites are based on search counts; they fail to reflect the relative importance of words within news articles and are open to outside influence such as rank manipulation. The proposed system instead relies on the relative importance of words computed with the TF-IDF model and extracts issue keywords by tracking how the extracted keywords change over time. To verify its effectiveness, 58,996 political news articles were collected, and the proposed TF-IDF-based method was compared with the existing TF-based method. The proposed system proved more effective than the existing method at analyzing how political news issues change over time.

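
The TF-versus-TF-IDF comparison described in this abstract can be illustrated by pooling each day's articles into one pseudo-document and ranking terms per day. The function and toy data below are illustrative assumptions, not the authors' implementation:

```python
import math
from collections import Counter

def issue_keywords(articles_by_day, top_n=1):
    """Rank candidate issue keywords per day by TF x log(D / df),
    where D is the number of days and df counts the days on which
    a term appears anywhere."""
    pooled = {day: [w for art in arts for w in art]
              for day, arts in articles_by_day.items()}
    d = len(pooled)
    df = Counter()
    for words in pooled.values():
        df.update(set(words))
    top = {}
    for day, words in pooled.items():
        tf = Counter(words)
        ranked = sorted(tf, key=lambda t: tf[t] * math.log(d / df[t]),
                        reverse=True)
        top[day] = ranked[:top_n]
    return top

days = {
    "mon": [["election", "election", "election", "budget", "budget"]],
    "tue": [["election", "election", "election", "scandal", "scandal"]],
}
top = issue_keywords(days)
# "election" is the most frequent term every day, so a plain TF ranking
# puts it on top; its IDF is zero, so TF-IDF surfaces each day's bursty
# term instead.
```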

Keyword Weight based Paragraph Extraction Algorithm (문단 가중치 분석 기반 본문 영역 선정 알고리즘)

  • Lee, Jongwon;Yu, Seongjong;Kim, Doan;Jung, Hoekyung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2018.05a / pp.462-463 / 2018
  • Traditional document analysis systems performed word-level analysis using a morphological analyzer or the TF-IDF technique. Such systems can derive key keywords by calculating keyword weights, but their structural limitations make them ill-suited to analyzing the content of documents. To address this, the proposed algorithm calculates keyword weights within a document, divides the document into paragraph regions, computes the importance of each divided region, and informs the user which region contains the most important paragraphs. The user is thus expected to receive a service better suited to document analysis than existing document analysis systems provide.

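
One way to realize paragraph-region selection of the kind described above is to treat each paragraph as a document for IDF purposes and score it by the summed TF-IDF weight of its words. This is a hedged sketch of that idea, not the authors' algorithm:

```python
import math
from collections import Counter

def most_important_paragraph(paragraphs):
    """Score each tokenized paragraph by the sum of TF-IDF weights of
    its words (paragraphs play the role of documents in the IDF), and
    return the index of the highest-scoring one plus all scores."""
    n = len(paragraphs)
    df = Counter()
    for p in paragraphs:
        df.update(set(p))
    scores = []
    for p in paragraphs:
        tf = Counter(p)
        scores.append(sum(c * math.log(n / df[t]) for t, c in tf.items()))
    return max(range(n), key=lambda i: scores[i]), scores

paragraphs = [
    ["the", "cat", "sat"],
    ["the", "tfidf", "tfidf", "weight"],
    ["the", "cat", "ran"],
]
best, scores = most_important_paragraph(paragraphs)
# paragraph 1 is dominated by terms unique to it, so it scores highest
```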

A New Semantic Distance Measurement Method using TF-IDF in Linked Open Data (링크드 오픈 데이터에서 TF-IDF를 이용한 새로운 시맨틱 거리 측정 기법)

  • Cho, Jung-Gil
    • Journal of the Korea Convergence Society / v.11 no.10 / pp.89-96 / 2020
  • Linked Data allows structured data to be published in a standard way so that datasets from various domains can be interlinked. With the rapid evolution of Linked Open Data (LOD), researchers are exploiting it to solve particular problems such as semantic similarity assessment. In this paper, we propose a method, built on the basic concept of Linked Data Semantic Distance (LDSD), for calculating the Linked Data semantic distance between resources for use in LOD-based recommender systems. The proposed semantic distance model combines the LOD-based semantic distance with a new link weight based on TF-IDF, which is well known in the field of information retrieval. To verify the effectiveness of this approach, performance was evaluated in the context of an LOD-based recommender system using mixed data from DBpedia and MovieLens. Experimental results show that the proposed method achieves higher accuracy than comparable methods, and that it improves the accuracy of the recommender system by expanding the range of semantic distance calculation.

Spam Filter by Using χ² Statistics and Support Vector Machines (카이제곱 통계량과 지지벡터기계를 이용한 스팸메일 필터)

  • Lee, Song-Wook
    • The KIPS Transactions: Part B / v.17B no.3 / pp.249-254 / 2010
  • We propose an automatic spam filter for e-mail data using Support Vector Machines (SVM). We use the lexical form of a word and its part-of-speech (POS) tag as features and select features by the chi-square statistic. Each feature is represented by TF (term frequency), TF-IDF, or binary weight in the experiments. After training the SVM on the selected features, it classifies each e-mail as spam or not. In experiments, the selected features improved the performance of our system, and we achieved an overall accuracy of 98.9% on the TREC05-p1 spam corpus.
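
Chi-square feature selection of the kind used here reduces, for each term and the spam class, to a statistic over a 2x2 contingency table. A sketch with hypothetical counts:

```python
def chi_square(n11, n10, n01, n00):
    """Chi-square statistic for a term/class contingency table:
    n11 = spam docs containing the term, n10 = ham docs containing it,
    n01 = spam docs lacking it,          n00 = ham docs lacking it."""
    n = n11 + n10 + n01 + n00
    num = n * (n11 * n00 - n10 * n01) ** 2
    den = (n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00)
    return num / den if den else 0.0

# A term seen in 90 of 100 spam mails but only 5 of 100 ham mails is
# strongly class-correlated; a term split evenly across classes scores 0.
strong = chi_square(90, 5, 10, 95)
even = chi_square(50, 50, 50, 50)
```

Features are then ranked by this statistic and only the top-scoring ones are kept before SVM training.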

A Document Sentiment Classification System Based on the Feature Weighting Method Improved by Measuring Sentence Sentiment Intensity (문장 감정 강도를 반영한 개선된 자질 가중치 기법 기반의 문서 감정 분류 시스템)

  • Hwang, Jae-Won;Ko, Young-Joong
    • Journal of KIISE: Software and Applications / v.36 no.6 / pp.491-497 / 2009
  • This paper proposes a new feature weighting method for document sentiment classification. The proposed method considers the differences in sentiment intensity among the sentences of a document. Sentiment features consist of sentiment vocabulary words, and their sentiment intensity scores are estimated by the chi-square statistic. The sentiment intensity of each sentence is then measured from the chi-square values of the sentiment features it contains, and the resulting sentence intensities are applied to the TF-IDF weights of all features in the document. We evaluate the proposed method using a support vector machine; experimental results show that it performs about 2.0% better than a baseline that does not consider the sentiment intensity of sentences.
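
The core idea, scaling a term's contribution by the sentiment intensity of its sentence before the usual TF-IDF weighting, can be sketched as follows; the per-word sentiment scores and the 1 + intensity scaling are assumptions for illustration:

```python
def intensity_weighted_tf(sentences, sentiment_score):
    """Accumulate term frequencies in which each occurrence is scaled
    by its sentence's sentiment intensity, measured as the sum of
    per-word sentiment scores (e.g. chi-square values) in the sentence.
    The scaled TF would then feed an ordinary TF-IDF weighting."""
    weighted_tf = {}
    for sent in sentences:
        intensity = sum(sentiment_score.get(w, 0.0) for w in sent)
        for w in sent:
            weighted_tf[w] = weighted_tf.get(w, 0.0) + 1.0 + intensity
    return weighted_tf

sentences = [["great", "plot"], ["dull", "ending"]]
scores = {"great": 2.0, "dull": 0.5}
wt = intensity_weighted_tf(sentences, scores)
# "plot" sits in a more intense sentence than "ending", so it ends up
# with the larger weight despite identical raw frequency.
```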

Event Sentence Extraction for Online Trend Analysis (온라인 동향 분석을 위한 이벤트 문장 추출 방안)

  • Yun, Bo-Hyun
    • The Journal of the Korea Contents Association / v.12 no.9 / pp.9-15 / 2012
  • Conventional event sentence extraction research does not learn the 3W (who, when, where) features in the learning step and merely checks whether a 3W feature exists in the extraction step. This paper presents a sentence-weight-based event sentence extraction method that calculates the weights of the 3W features in the learning step and applies those weights in the extraction step. Experimental results show that keeping the top 30% of features ranked by the TF×IDF weighting method works well for feature filtering. In the real estate domain of public issues, the performance of the sentence-weight-based method is improved by the who and when features, and the method outperforms other machine-learning-based extraction methods.
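
The top-30% feature filtering step can be sketched with a collection-level TF × IDF score; the scoring granularity (whole-collection TF) is an assumption here, since the abstract does not specify it:

```python
import math
from collections import Counter

def top_feature_filter(docs, keep_ratio=0.3):
    """Keep the top share of features ranked by collection-wide
    term frequency times log(N / df)."""
    n = len(docs)
    tf, df = Counter(), Counter()
    for doc in docs:
        tf.update(doc)
        df.update(set(doc))
    ranked = sorted(tf, key=lambda t: tf[t] * math.log(n / df[t]),
                    reverse=True)
    return set(ranked[:max(1, int(len(ranked) * keep_ratio))])

docs = [
    ["sale", "price", "area"],
    ["sale", "price"],
    ["sale", "loan"],
]
kept = top_feature_filter(docs, keep_ratio=0.5)
# "sale" occurs in every document, scores zero, and is filtered out
```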

Machine Learning Based Blog Text Opinion Classification System Using Opinion Word Centered-Dependency Tree Pattern Features (의견어중심의 의존트리패턴자질을 이용한 기계학습기반 한국어 블로그 문서 의견분류시스템)

  • Kwak, Dong-Min;Lee, Seung-Wook
    • Proceedings of the Korea Information Processing Society Conference / 2009.11a / pp.337-338 / 2009
  • Research on opinion polarity classification of blog documents has mainly relied on machine learning techniques, with part-of-speech information (nouns, verbs, etc.) and opinion word lexicons as the principal features. However, considering only a single opinion word may not provide enough information to determine polarity, which can lead to inaccurate results. This paper assumes that considering several words simultaneously enables more accurate opinion classification. To extract effective opinion word features, we extracted dependency tree patterns through dependency parsing, anchored on opinion words likely to convey opinions, applied the proposed PF-IDF weighting, and ran comparative experiments with support vector machines (SVM) and a multinomial naive Bayes (MNNB) algorithm. Compared with the baseline TF-IDF weighting, accuracy improved by 5% with SVM and by 8.9% with MNNB.

Term Weighting Using Date Information and Its Appliance in Automatic Text Classification (날짜 정보를 이용한 가중치 계산 방법을 적용한 자동 문서분류)

  • Shim, Bojun;Park, Jinwoo;Seo, Jungyun
    • Annual Conference on Human and Language Technology / 2007.10a / pp.169-173 / 2007
  • The words that make up a sentence do not all carry the same importance in expressing its meaning. The information retrieval field has therefore long studied strategies for assigning different weights to words: very common function words are classified as stopwords and excluded from consideration, named entity recognizers are used to give proper nouns higher weights, and schemes such as TF-IDF weight a word by how it is distributed and how frequently it occurs across a document collection. In all of these approaches, a given word receives the same weight under any circumstances. This paper proposes a strategy in which the same word can receive a high weight on one date, when it is an important word, and a low weight on another. The method is a general-purpose strategy usable in any information retrieval task. In particular, experiments confirmed that applying the proposed method to text classification improves classification accuracy over a baseline system that does not use it.


A Study on the Deduction of Social Issues Applying Word Embedding: With an Emphasis on News Articles Related to the Disabled (단어 임베딩(Word Embedding) 기법을 적용한 키워드 중심의 사회적 이슈 도출 연구: 장애인 관련 뉴스 기사를 중심으로)

  • Choi, Garam;Choi, Sung-Pil
    • Journal of the Korean Society for Information Management / v.35 no.1 / pp.231-250 / 2018
  • In this paper, we propose a new methodology for extracting and formalizing social issue topics at a specific time using a set of keywords extracted automatically from online news articles. We first extracted keywords by applying TF-IDF, selected through a series of comparative experiments on various statistical weighting schemes that measure the importance of individual words in a large set of texts. To effectively calculate the semantic relations between the extracted keywords, a set of word embedding vectors was built from about 1,000,000 separately collected news articles. The extracted keywords were quantified as numerical vectors and clustered with the K-means algorithm. A qualitative in-depth analysis of the resulting keyword clusters showed that most clusters formed appropriate topics with enough semantic cohesion for labels to be assigned easily.
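
The clustering step, quantifying keywords as embedding vectors and grouping them with K-means, can be sketched without any ML library. The 2-d toy vectors below stand in for word2vec-style keyword embeddings; in practice one would use a trained embedding model and, for example, scikit-learn's KMeans:

```python
import random

def kmeans(vectors, k, iters=20, seed=0):
    """Plain k-means: assign each vector to its nearest center, then
    move each center to the mean of its members. Returns the cluster
    index of every vector."""
    rng = random.Random(seed)
    centers = rng.sample(vectors, k)
    assign = [0] * len(vectors)
    for _ in range(iters):
        for i, v in enumerate(vectors):
            assign[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2
                                              for a, b in zip(v, centers[c])))
        for c in range(k):
            members = [vectors[i] for i in range(len(vectors))
                       if assign[i] == c]
            if members:  # an emptied cluster keeps its old center
                centers[c] = [sum(dim) / len(members)
                              for dim in zip(*members)]
    return assign

# two well-separated "keyword" groups in a toy 2-d embedding space
vectors = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 4.9]]
assign = kmeans(vectors, k=2)
```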