• Title/Summary/Keyword: Document filtering


A Study on Document Filtering Using Naive Bayesian Classifier (베이지안 분류기를 이용한 문서 필터링)

  • Lim Soo-Yeon;Son Ki-Jun
    • The Journal of the Korea Contents Association / v.5 no.3 / pp.227-235 / 2005
  • Document filtering is the task of deciding whether a document is relevant to a specified topic. As the Internet and the Web become widespread and the number of documents delivered by e-mail grows explosively, the importance of text filtering increases as well. In this paper, we treat document filtering as a binary document classification problem and propose a news filtering system based on a Bayesian classifier. Before performing filtering, we run experiments to find out how many training documents are needed and how accurate the relevance judgments must be.
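
The binary Bayesian filtering described above can be sketched as a word-count Naive Bayes classifier. This is a minimal illustration, not the authors' implementation: the helper names (`train_nb`, `is_relevant`), the bag-of-words features, and the Laplace smoothing are all assumptions.

```python
from collections import Counter
import math

def train_nb(docs, labels):
    """Count word frequencies per class: 1 = relevant, 0 = not relevant."""
    counts = {0: Counter(), 1: Counter()}
    priors = Counter(labels)
    for doc, y in zip(docs, labels):
        counts[y].update(doc.lower().split())
    return counts, priors

def is_relevant(doc, counts, priors):
    """Return True if the relevant class has the higher log-posterior."""
    vocab = set(counts[0]) | set(counts[1])
    scores = {}
    for y in (0, 1):
        total = sum(counts[y].values())
        score = math.log(priors[y] / sum(priors.values()))
        for w in doc.lower().split():
            # Laplace (add-one) smoothing over the shared vocabulary
            score += math.log((counts[y][w] + 1) / (total + len(vocab)))
        scores[y] = score
    return scores[1] > scores[0]
```

Treating filtering as binary classification in this way reduces the problem to estimating two class-conditional word distributions from the labelled training documents the paper's experiments vary.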


Feature Filtering Methods for Web Documents Clustering (웹 문서 클러스터링에서의 자질 필터링 방법)

  • Park Heum;Kwon Hyuk-Chul
    • The KIPS Transactions:PartB / v.13B no.4 s.107 / pp.489-498 / 2006
  • Clustering results differ according to the dataset, and performance worsens even on web documents manually processed by an indexer, because statistical feature selection can yield representative clusters for a feature yet fails to eliminate irrelevant features (i.e., non-obvious features and those appearing in general documents). Those irrelevant features should be eliminated to improve clustering performance. Therefore, this paper proposes three feature-filtering algorithms that consider feature values per document set, together with the distribution, frequency, and weights of features per document set: (1) a feature-filtering algorithm in a document (FFID), (2) a feature-filtering algorithm in a document matrix (FFIM), and (3) a hybrid method combining both FFID and FFIM (HFF). We tested clustering performance with feature selection using term frequency and expanded co-link information, and with feature filtering using the FFID, FFIM, and HFF methods. In our experiments, HFF performed best, and FFIM performed better than FFID.
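
The core idea of dropping both rare noise terms and overly general terms can be illustrated with a document-frequency filter. This sketch is an assumed simplification of the FFID/FFIM idea, not the paper's algorithms; the thresholds and the set-based document representation are illustrative choices.

```python
from collections import Counter

def filter_features(docs, low=0.1, high=0.6):
    """docs: list of term sets, one per document.
    Drop terms appearing in too few documents (noise) or in too many
    (general, non-discriminative) -- the kind of irrelevant feature
    that statistical selection alone leaves behind."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))
    keep = {t for t, c in df.items() if low <= c / n <= high}
    return [d & keep for d in docs]
```

A term like "the", present in every document, contributes nothing to cluster separation, which is why a high-frequency cutoff helps clustering even after feature selection.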

Document Classification of Small Size Documents Using Extended Relief-F Algorithm (확장된 Relief-F 알고리즘을 이용한 소규모 크기 문서의 자동분류)

  • Park, Heum
    • The KIPS Transactions:PartB / v.16B no.3 / pp.233-238 / 2009
  • This paper presents an approach to the classification of small documents using the instance-based feature-filtering algorithm Relief-F. In document classification, small documents containing only a few features do not always classify well: the total number of features in the document set is large, but the feature count of each document is relatively small, so similarities between documents are very low under common similarity measures and classifiers. In particular, classification performance is poor for web documents in directory services and for disk sectors that cannot be connected to their original file after hard-disk recovery. We therefore propose the Extended Relief-F (ERelief-F) algorithm, which applies the instance-based feature-filtering algorithm Relief-F as a preprocessing step for classification and addresses the weaknesses of Relief-F. For comparison, we tested information gain, odds ratio, and Relief-F for feature filtering, and used kNN and SVM classifiers on the resulting feature values. In the experiments, ERelief-F performed best on all datasets and removed many irrelevant features from the document sets.
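
For readers unfamiliar with the family of algorithms the paper extends, here is a sketch of basic Relief weighting for binary-labelled data: each feature is rewarded when it differs on the nearest instance of the other class (a "miss") and penalized when it differs on the nearest instance of the same class (a "hit"). This is plain Relief, not the paper's ERelief-F; the function name and parameters are illustrative, and it assumes each class has at least two instances.

```python
import numpy as np

def relief(X, y, n_iter=None, rng=None):
    """Basic Relief feature weights for a binary-labelled dataset --
    a simplified stand-in for the Relief-F variant the paper extends."""
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    w = np.zeros(d)
    m = n_iter or n
    for _ in range(m):
        r = rng.integers(n)
        same = [j for j in range(n) if j != r and y[j] == y[r]]
        other = [j for j in range(n) if y[j] != y[r]]
        hit = min(same, key=lambda j: np.abs(X[j] - X[r]).sum())
        miss = min(other, key=lambda j: np.abs(X[j] - X[r]).sum())
        # reward separation from the other class, penalize in-class spread
        w += np.abs(X[r] - X[miss]) - np.abs(X[r] - X[hit])
    return w / m
```

Features with weights near or below zero are candidates for removal, which is how an instance-based filter of this kind reduces the irrelevant features that hurt small-document similarity.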

Conceptual Object Grouping for Multimedia Document Management

  • Lee, Chong-Deuk;Jeong, Taeg-Won
    • International Journal of Fuzzy Logic and Intelligent Systems / v.9 no.3 / pp.161-165 / 2009
  • The growth of multimedia information on the Web requires new methods to manage and serve multimedia documents efficiently. This paper proposes a conceptual object grouping method based on fuzzy filtering, constructed automatically as the collection of multimedia documents grows. The proposed method automatically builds subsumption relations between conceptual objects using fuzzy filtering of the document objects extracted from domains. Grouping of conceptual objects is treated as a subsumption relation decided by a $\mu$-cut. The paper proposes the $\mu$-cut, FAS (Fuzzy Average Similarity), and DSR (Direct Subsumption Relation) to drive the fuzzy filtering, which groups related document objects easily. About 1,000 conceptual objects were used in the performance test of the proposed method. The simulation results show that the proposed method has better retrieval performance than OGM (Optimistic Genealogy Method) and BGM (Balanced Genealogy Method).
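
The $\mu$-cut idea, grouping objects whose fuzzy similarity reaches a threshold $\mu$, can be illustrated with a toy average-similarity measure. Both the similarity formula and the single-representative grouping below are assumptions for illustration; the paper's actual FAS and DSR definitions are not reproduced here.

```python
def fuzzy_average_similarity(a, b):
    """Toy FAS over fuzzy attribute dicts: mean of min/max membership
    ratios across the union of attributes -- an assumed form."""
    keys = set(a) | set(b)
    return sum(min(a.get(k, 0), b.get(k, 0)) /
               max(a.get(k, 0), b.get(k, 0), 1e-9) for k in keys) / len(keys)

def mu_cut_groups(objects, mu=0.5):
    """Group objects whose similarity to a group's first member reaches
    the mu threshold (single-representative simplification)."""
    groups = []
    for name, feats in objects.items():
        for g in groups:
            if fuzzy_average_similarity(feats, objects[g[0]]) >= mu:
                g.append(name)
                break
        else:
            groups.append([name])
    return groups
```

Raising $\mu$ produces more, tighter groups; lowering it merges objects into fewer, coarser groups, which is the lever the $\mu$-cut provides.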

Joint Hierarchical Semantic Clipping and Sentence Extraction for Document Summarization

  • Yan, Wanying;Guo, Junjun
    • Journal of Information Processing Systems / v.16 no.4 / pp.820-831 / 2020
  • Extractive document summarization aims to select a few sentences that preserve the main information of a given document, but current extractive methods do not consider the sentence-information repetition problem, especially for news document summarization. In view of the importance and redundancy of news text, this paper proposes a neural extractive summarization approach with joint sentence semantic clipping and selection, which effectively addresses sentence repetition in news summaries. Specifically, a hierarchical selective encoding network is constructed for both sentence-level and document-level representations, and data containing important information is extracted from the news text; a sentence-extractor strategy is then adopted for joint scoring and clipping of redundant information. In this way, the model strikes a balance between extracting important information and filtering redundant information. Experimental results on the CNN/Daily Mail dataset and a Court Public Opinion News dataset we built show the effectiveness of the proposed approach in terms of ROUGE metrics, especially for redundant-information filtering.
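
The balance between importance scoring and redundancy filtering that the paper learns neurally can be sketched in its classical greedy form, maximal-marginal-relevance-style selection. This is an assumed illustration of the score-versus-redundancy trade-off, not the paper's hierarchical encoder or extractor.

```python
def select_sentences(scores, sims, k=3, lam=0.7):
    """Greedily pick k sentences, trading importance (scores) against
    similarity to already-selected sentences (sims matrix).
    lam=1.0 ignores redundancy; lam=0.0 ignores importance."""
    chosen = []
    candidates = set(range(len(scores)))
    while candidates and len(chosen) < k:
        best = max(candidates,
                   key=lambda i: lam * scores[i] -
                   (1 - lam) * max((sims[i][j] for j in chosen), default=0))
        chosen.append(best)
        candidates.remove(best)
    return chosen
```

With two near-duplicate high-scoring sentences, the redundancy term suppresses the second copy in favour of a lower-scoring but novel sentence, which is exactly the repetition problem the paper targets.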

Automatic Preference Rating using User Profile in Content-based Collaborative Filtering System (내용 기반 협력적 여과 시스템에서 사용자 프로파일을 이용한 자동 선호도 평가)

  • 고수정;최성용;임기욱;이정현
    • Journal of KIISE:Software and Applications / v.31 no.8 / pp.1062-1072 / 2004
  • Collaborative filtering systems based on a {user-document} matrix are effective at recommending web documents to users, but their recommendation accuracy suffers from the first-rater problem and from sparsity. This paper proposes an automatic preference-rating method that generates a user profile to address these shortcomings. The profile is a content-based collaborative user profile, generated by combining a content-based user profile with a collaborative user profile using the mutual information method. The collaborative user profile is based on the {user-document} matrix of the collaborative filtering system, while the content-based user profile is generated by relevance feedback in a content-based filtering system. After normalizing the combined content-based collaborative user profiles, the method automatically rates user preferences by reflecting the normalized profile in the {user-document} matrix of the collaborative filtering system. We evaluated our method on a large database of user ratings for web documents and verified that it is more efficient than existing methods.
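
The merge-then-normalize step can be sketched as follows. Note the caveat: the paper combines the two profiles with mutual information, whereas this sketch uses a plain additive merge for illustration; the function name and dict representation are assumptions.

```python
def combine_profiles(content, collab):
    """Merge a content-based and a collaborative term profile into one
    normalized profile (simple additive merge standing in for the
    paper's mutual-information combination)."""
    terms = set(content) | set(collab)
    merged = {t: content.get(t, 0) + collab.get(t, 0) for t in terms}
    total = sum(merged.values()) or 1
    return {t: v / total for t, v in merged.items()}
```

A term supported by both evidence sources ends up with more weight than a term seen in only one, and the normalization makes the combined profile usable as a row of preference estimates in the {user-document} matrix.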

Text Filtering using Iterative Boosting Algorithms (반복적 부스팅 학습을 이용한 문서 여과)

  • Hahn, Sang-Youn;Zang, Byoung-Tak
    • Journal of KIISE:Software and Applications / v.29 no.4 / pp.270-277 / 2002
  • Text filtering is the task of deciding whether a document is relevant to a specified topic. As the Internet and the Web become widespread and the number of documents delivered by e-mail grows explosively, the importance of text filtering increases as well. The aim of this paper is to improve the accuracy of text filtering systems using machine learning techniques. We apply AdaBoost algorithms to the filtering task. An AdaBoost algorithm generates and combines a series of simple hypotheses, each of which decides the relevance of a document to a topic based on whether or not the document includes a certain word. We begin with an existing AdaBoost algorithm whose weak hypotheses output 1 or -1, and then extend it to use weak hypotheses with real-valued outputs, a recently proposed variant that improves error-reduction rates and final filtering performance. Next, we attempt further improvement by setting weights randomly according to the continuous Poisson distribution, executing AdaBoost, repeating these steps several times, and then combining all the hypotheses learned. This mitigates the overfitting that may occur when learning from a small amount of data. Experiments were performed on the real document collections used in TREC-8, a well-established text retrieval contest; the dataset includes Financial Times articles from 1992 to 1994. The results show that AdaBoost with real-valued hypotheses outperforms AdaBoost with binary-valued hypotheses, and that AdaBoost iterated with random weights further improves filtering accuracy. Comparison results for all participants of the TREC-8 filtering task are also provided.
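
The binary-hypothesis baseline the paper starts from, AdaBoost over word-presence decision stumps outputting +1 or -1, can be sketched directly. This is an illustrative implementation of that baseline only (not the real-valued or Poisson-restart extensions), and the function names and uniform initial weights are assumptions.

```python
import math

def adaboost_stumps(docs, labels, rounds=5):
    """AdaBoost over word-presence stumps: a stump predicts +sign if
    its word occurs in the document, -sign otherwise. labels in {+1,-1}."""
    n = len(docs)
    sets = [set(d.lower().split()) for d in docs]
    vocab = sorted(set().union(*sets))
    w = [1 / n] * n
    model = []
    for _ in range(rounds):
        best = None
        for word in vocab:
            for sign in (1, -1):
                # weighted error of this stump under current weights
                err = sum(wi for wi, s, y in zip(w, sets, labels)
                          if (sign if word in s else -sign) != y)
                if best is None or err < best[0]:
                    best = (err, word, sign)
        err, word, sign = best
        err = max(err, 1e-9)          # avoid log(inf) on a perfect stump
        if err >= 0.5:
            break
        alpha = 0.5 * math.log((1 - err) / err)
        model.append((alpha, word, sign))
        # up-weight misclassified documents, then renormalize
        w = [wi * math.exp(-alpha * y * (sign if word in s else -sign))
             for wi, s, y in zip(w, sets, labels)]
        z = sum(w)
        w = [wi / z for wi in w]
    return model

def predict(model, doc):
    s = set(doc.lower().split())
    total = sum(a * (sign if word in s else -sign) for a, word, sign in model)
    return 1 if total >= 0 else -1
```

Each round adds one "does the document contain this word?" vote, weighted by its reliability alpha; the final filter is the sign of the weighted vote sum.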

FiST: XML Document Filtering by Sequencing Twig Patterns (가지형 패턴의 시퀀스화를 이용한 XML 문서 필터링)

  • Kwon Joon-Ho;Rao Praveen;Moon Bong-Ki;Lee Suk-Ho
    • Journal of KIISE:Databases / v.33 no.4 / pp.423-436 / 2006
  • In recent years, publish-subscribe (pub-sub) systems based on XML document filtering have received much attention. In a typical pub-sub system, subscribing users specify their interests in profiles expressed in the XPath language, and each new document is matched against the user profiles so that it is delivered only to the interested subscribers. As the number of subscribed users and their profiles can grow very large, the scalability of the system is critical to the success of pub-sub services. In this paper, we propose a novel scalable filtering system called FiST (Filtering by Sequencing Twigs) that transforms twig patterns expressed in XPath, as well as XML documents, into sequences using Prufer's method. Consequently, instead of matching the linear paths of twig patterns individually and merging the matches during post-processing, FiST performs holistic matching of twig patterns against incoming documents. FiST organizes the sequences into a dynamic hash-based index for efficient filtering. We demonstrate that our holistic matching approach yields lower filtering cost and good scalability under various conditions.
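
The sequencing primitive FiST builds on is the classic Prufer encoding, which turns a labelled tree into a sequence by repeatedly deleting the smallest-labelled leaf and recording its neighbour. The sketch below shows that classic construction on integer-labelled trees; FiST's adaptation to XML node labels and its hash index are not reproduced here.

```python
from collections import defaultdict

def prufer_sequence(edges, n):
    """Classic Prufer sequence of a tree on nodes 1..n: remove the
    smallest-numbered leaf, record its neighbour, repeat n-2 times."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seq = []
    for _ in range(n - 2):
        leaf = min(v for v in adj if len(adj[v]) == 1)
        parent = next(iter(adj[leaf]))
        seq.append(parent)
        adj[parent].discard(leaf)
        del adj[leaf]
    return seq
```

Because the encoding is invertible, a tree-shaped twig pattern and an XML document can both be compared as flat sequences, which is what makes holistic (rather than path-by-path) matching possible.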

Frameworks for Context Recognition in Document Filtering and Classification

  • Kim Haeng-Kon;Yang Hae-Sool
    • The Journal of Information Systems / v.14 no.3 / pp.82-88 / 2005
  • Much information has been hierarchically organized to facilitate browsing, retrieval, and dissemination. In practice, information may be entered at any time, but only a small subset of it may be classified into categories in a hierarchy. Therefore, achieving document filtering (DF) in the course of document classification (DC) is an essential basis for developing an information center that classifies suitable documents into suitable categories, reducing information overload while facilitating information sharing. In this paper, we present ICenter, a technique that performs DF and DC by recognizing the context of discussion (COD) of each document and category. Experiments on real-world data show that, through COD recognition, the performance of ICenter is significantly better. The results are of theoretical and practical significance: ICenter may serve as an essential basis for developing an information center for a user community that shares and organizes a hierarchy of textual information.


A Study on the Improvement of Retrieval Effectiveness to Clustered and Filtered Document through Query Expansion (질의어 확장에 기반을 둔 클러스터링 및 필터링 문서의 검색효율 제고에 관한 연구)

  • 노동조
    • Journal of the Korean BIBLIA Society for library and Information Science / v.14 no.1 / pp.219-230 / 2003
  • The purpose of this study is to improve the retrieval effectiveness of clustered and filtered documents through query expansion. The results show that expanded queries and documents, information from an encyclopedia, and clustering and filtering techniques are effective in improving retrieval effectiveness.
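
Query expansion of the kind the study evaluates can be sketched minimally: each query term is widened with related terms from a reference source before matching. The toy thesaurus dict below stands in for the study's encyclopedia-based expansion; the function names and set-overlap matching are illustrative assumptions.

```python
def expand_query(query, thesaurus):
    """Add related terms from a reference source to each query term."""
    terms = set(query.lower().split())
    for t in list(terms):
        terms.update(thesaurus.get(t, ()))
    return terms

def search(docs, query, thesaurus):
    """Return indices of documents sharing at least one term with the
    expanded query (simple overlap matching for illustration)."""
    q = expand_query(query, thesaurus)
    return [i for i, d in enumerate(docs)
            if q & set(d.lower().split())]
```

Expansion lets the query "car" retrieve a document mentioning only "automobile", which is the vocabulary-mismatch problem that expansion addresses.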
