• Title/Summary/Keyword: Document Filtering

Efficient Web Document Search based on Users' Understanding Levels (사용자의 이해수준에 따른 효율적인 웹문서 검색)

  • Shim, Sang-Hee;Lee, Soo-Jung
    • Journal of KIISE: Computing Practices and Letters / v.15 no.1 / pp.38-46 / 2009
  • With the rapid increase in the number of Web documents, the problem of information overload in Internet search is growing more serious. To ease this problem, researchers are paying attention to personalization, which tailors the Web environment to users' preferences, yet most search engines produce results focused only on users' queries. The present study therefore examines a method of producing search results personalized according to a user's understanding level. What differentiates this study from previous research is that it considers the user's understanding level and retrieves documents whose difficulty fits that level first. The difficulty level of a document is adjusted based on the understanding levels of the users who access it, and a user's understanding level is updated periodically based on the difficulty of the documents the user accesses. A Web search system based on these results is expected to be highly useful to Web users of various age groups.
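
The mutual update the abstract describes can be sketched in a few lines. The 0-1 scale, the smoothing rate `alpha`, and the function names are illustrative assumptions, not details from the paper:

```python
def update_difficulty(doc_difficulty, reader_levels, alpha=0.1):
    """Move a document's difficulty toward the mean understanding
    level of the users who accessed it (exponential smoothing)."""
    if not reader_levels:
        return doc_difficulty
    mean_level = sum(reader_levels) / len(reader_levels)
    return (1 - alpha) * doc_difficulty + alpha * mean_level

def update_user_level(accessed_difficulties):
    """Periodically re-estimate a user's understanding level from the
    difficulty of the documents the user has accessed."""
    if not accessed_difficulties:
        return 0.5  # neutral default when nothing has been read yet
    return sum(accessed_difficulties) / len(accessed_difficulties)
```

Ranking would then prefer documents whose stored difficulty is closest to the querying user's current level.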

Keyword Extraction from News Corpus using Modified TF-IDF (TF-IDF의 변형을 이용한 전자뉴스에서의 키워드 추출 기법)

  • Lee, Sung-Jick;Kim, Han-Joon
    • The Journal of Society for e-Business Studies / v.14 no.4 / pp.59-73 / 2009
  • Keyword extraction is an important and essential technique for text mining applications such as information retrieval, text categorization, summarization, and topic detection. A set of keywords extracted from a large-scale electronic document collection serves as significant features for text mining algorithms and contributes to improving the performance of document browsing, topic detection, and automated text classification. This paper presents a keyword extraction technique that can be used to detect topics for each news domain in a large document collection from Internet news portal sites. Basically, we use six variants of the traditional TF-IDF weighting model. On top of the TF-IDF model, we propose a word filtering technique called 'cross-domain comparison filtering'. To prove the effectiveness of our method, we analyze the usefulness of keywords extracted from Korean news articles and present changes in the keywords of each news domain over time.
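
A minimal sketch of the two stages named above. The TF-IDF here is the plain TF × log-IDF form (the paper evaluates six variants), and the cross-domain step below is only a rough stand-in for the paper's 'cross-domain comparison filtering': drop a keyword that ranks highly in too many domains, since it cannot characterize any one of them:

```python
import math
from collections import Counter

def tf_idf(docs):
    """docs: list of token lists. Returns one {word: weight} dict per
    document, using plain TF * log(N / DF)."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))          # document frequency
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({w: tf[w] * math.log(n / df[w]) for w in tf})
    return weights

def cross_domain_filter(domain_keywords, max_domains=1):
    """domain_keywords: {domain: [top keywords]}. Keep a keyword only if
    it appears among the top keywords of at most `max_domains` domains."""
    counts = Counter()
    for kws in domain_keywords.values():
        counts.update(kws)
    return {d: [w for w in kws if counts[w] <= max_domains]
            for d, kws in domain_keywords.items()}
```

In this toy form, a word like "win" that tops both a sports and a politics domain would be filtered out of both keyword lists.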

The Extraction of Table Lines and Data in Document Image (문서영상에서 표 구성 직선과 데이터 추출)

  • Jang, Dae-Geun;Kim, Eui-Jeong
    • Journal of the Korea Institute of Information and Communication Engineering / v.10 no.3 / pp.556-563 / 2006
  • We should extract the lines and data that make up a table in order to classify the table region and analyze its structure in a document image. It is difficult, however, to extract lines and data exactly, because lines may be cut or their lengths changed, and characters or noise may be merged into the table lines. These problems result from errors in the image input device or from image reduction. In this paper, we propose a method of extracting lines and data for table region classification and structure analysis that outperforms previous ones, including commercial software. The proposed method extracts the horizontal and vertical lines that make up the table using a one-dimensional median filter. This filter not only eliminates noise attached to a line and lines orthogonal to the filtering direction, but also connects cut lines whose gap is shorter than the filter tap length while extracting lines along the filtering direction. Furthermore, text attached to a line is separated in the process of extracting vertical lines.
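
The core filter is simple to illustrate on a binary pixel row (1 = black). Run along a row, it preserves long horizontal runs, removes short orthogonal marks, and bridges gaps shorter than the tap, exactly the three effects listed above. The row representation and tap size of 5 are assumptions for illustration:

```python
def median_filter_1d(row, tap=5):
    """One-dimensional median filter over a binary pixel row."""
    half = tap // 2
    out = []
    for i in range(len(row)):
        window = sorted(row[max(0, i - half):i + half + 1])
        out.append(window[len(window) // 2])  # median of the window
    return out
```

A single-pixel gap inside a line is filled, while an isolated single black pixel (noise, or the one-pixel cross-section of an orthogonal line) is erased.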

Comparing Korean Spam Document Classification Using Document Classification Algorithms (문서 분류 알고리즘을 이용한 한국어 스팸 문서 분류 성능 비교)

  • Song, Chull-Hwan;Yoo, Seong-Joon
    • Proceedings of the Korean Information Science Society Conference / 2006.10c / pp.222-225 / 2006
  • Korea has more Internet users than many other countries, and Korean Internet users accordingly suffer considerable inconvenience from spam mail. To address this problem, this paper describes a study on Korean spam-document filtering using various feature weighting, feature selection, and document classification algorithms. Nouns are extracted from Korean documents (spam and non-spam) and used as input features for each classification algorithm. For feature weighting, instead of the traditional approach, we compute the variance of each feature, apply max value selection to choose global features, and then apply the traditional feature selection methods MI, IG, and CHI to extract features. The extracted features are fed to classification algorithms such as Naive Bayes and Support Vector Machines; the Vector Space Model is used in its traditional form. As a result, using the Support Vector Machine classifier with TF-IDF variance weighting (combined with max value selection) and CHI feature selection, we obtained a recall of 99.4%, a precision of 97.4%, and an F-measure of 98.39%.
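
The variance-then-selection step described above can be sketched as follows. This shows only the generic idea (per-feature variance of document weights, then keeping the largest values); how the paper combines it with TF-IDF and the later MI/IG/CHI stages is not reproduced, and the function names are assumptions:

```python
def feature_variance(doc_vectors):
    """doc_vectors: list of {feature: weight} dicts, one per document.
    Returns the population variance of each feature's weight, treating
    a missing feature as weight 0."""
    features = set().union(*doc_vectors) if doc_vectors else set()
    n = len(doc_vectors)
    var = {}
    for f in features:
        vals = [v.get(f, 0.0) for v in doc_vectors]
        mean = sum(vals) / n
        var[f] = sum((x - mean) ** 2 for x in vals) / n
    return var

def select_top_k(variances, k):
    """'Max value selection': keep the k features with largest variance."""
    return sorted(variances, key=variances.get, reverse=True)[:k]
```

Features whose weights vary strongly across documents are kept as globally discriminative; uniformly weighted features are dropped before the classifier sees them.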

Reputation Analysis of Document Using Probabilistic Latent Semantic Analysis Based on Weighting Distinctions (가중치 기반 PLSA를 이용한 문서 평가 분석)

  • Cho, Shi-Won;Lee, Dong-Wook
    • The Transactions of The Korean Institute of Electrical Engineers / v.58 no.3 / pp.632-638 / 2009
  • Probabilistic Latent Semantic Analysis has many applications in information retrieval and filtering, natural language processing, machine learning from text, and related areas. In this paper, we propose an algorithm using a weighted Probabilistic Latent Semantic Analysis model to find contextual phrases and opinions in documents. Traditional keyword search is unable to find the semantic relations of phrases; overcoming this obstacle requires techniques for automatically classifying the semantic relations of phrases. Through experiments, we show that the proposed algorithm works well for discovering the semantic relations of phrases and presents these relations in the vector-space model. The proposed algorithm can support a variety of analyses, including document classification, online reputation analysis, and collaborative recommendation.
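
A compact EM sketch of plain PLSA is shown below; the paper's distinction-based weighting enters here only as weighted co-occurrence counts, which is an assumption about where the weights apply, not the paper's exact formulation:

```python
import random

def plsa(counts, n_topics, n_docs, n_words, iters=30, seed=0):
    """Tiny PLSA trained by EM. counts: {(doc, word): weighted count}.
    Returns (p_z_d, p_w_z): per-doc topic mixtures and per-topic word
    distributions, as nested lists."""
    rng = random.Random(seed)
    normalize = lambda v: [x / s for x in v] if (s := sum(v)) else v
    p_z_d = [normalize([rng.random() for _ in range(n_topics)])
             for _ in range(n_docs)]
    p_w_z = [normalize([rng.random() for _ in range(n_words)])
             for _ in range(n_topics)]
    for _ in range(iters):
        new_zd = [[0.0] * n_topics for _ in range(n_docs)]
        new_wz = [[0.0] * n_words for _ in range(n_topics)]
        for (d, w), c in counts.items():
            # E-step: posterior P(z | d, w) up to normalization
            post = normalize([p_z_d[d][z] * p_w_z[z][w]
                              for z in range(n_topics)])
            # M-step accumulation, scaled by the (weighted) count
            for z in range(n_topics):
                new_zd[d][z] += c * post[z]
                new_wz[z][w] += c * post[z]
        p_z_d = [normalize(r) for r in new_zd]
        p_w_z = [normalize(r) for r in new_wz]
    return p_z_d, p_w_z
```

The learned P(z|d) vectors give each document a low-dimensional representation usable for the classification and recommendation tasks the abstract mentions.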

Fingerprint region and table segmentation in fingerprint document (지문원지의 영역분할 및 도표 인식)

  • 정윤주;이영화;이준재;심재창
    • Proceedings of the IEEK Conference / 1999.11a / pp.552-555 / 1999
  • In this paper, a method is presented for extracting the fingerprint regions and the table from a fingerprint document, an A4-sized form containing ten fingerprint images in a table. Each fingerprint region is extracted by segmenting the foreground fingerprint region using a block filtering method and detecting its center point. The table is extracted by detecting horizontal lines using line tracing and detecting vertical lines from their orthogonal equations. A T-shaped mask is proposed for finding the starting points of the vertical lines, which intersect a horizontal line in the form of a 'T'. Experimental results show a correct extraction rate above 95% for the fingerprint regions and the table.

XML Information Retrieval by Document Filtering and Query Expansion Based on Ontology (온톨로지 기반 문서여과 및 질의확장에 의한 XML 정보검색)

  • Kim Myung Sook;Kong Yong-Hae
    • Journal of Korea Multimedia Society / v.8 no.5 / pp.596-605 / 2005
  • Conventional XML query methods such as simple keyword matching or structural query expansion are not sufficient to capture the underlying information in documents. Moreover, these methods inefficiently try to query all documents. This paper proposes document filtering and query expansion methods based on ontology. Using the ontology, we construct a universal DTD that can filter out unnecessary documents. A query expansion method is then developed through analysis of the concept hierarchy and the associations among concepts. The proposed methods are applied to a variety of sample XML documents to test their effectiveness.
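
The expansion step can be illustrated with a toy ontology. The hierarchy and association tables below are assumptions; the paper derives its concept hierarchy from a universal DTD rather than hand-written maps:

```python
def expand_query(term, children, associations):
    """Expand a query concept with all of its sub-concepts (walking the
    child->descendants hierarchy) plus its directly associated concepts.
    children: {concept: [sub-concepts]}, associations: {concept: [concepts]}."""
    expanded = {term}
    stack = [term]
    while stack:                      # depth-first walk of the hierarchy
        node = stack.pop()
        for child in children.get(node, []):
            if child not in expanded:
                expanded.add(child)
                stack.append(child)
    expanded |= set(associations.get(term, []))  # add associated concepts
    return expanded
```

A query for "vehicle" would then also match documents tagged with "car", "suv", or the associated concept "engine", instead of relying on an exact keyword hit.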

Text filtering by Boosting Linear Perceptrons

  • O, Jang-Min;Zhang, Byoung-Tak
    • Journal of the Korean Institute of Intelligent Systems / v.10 no.4 / pp.374-378 / 2000
  • In information retrieval, a lack of positive examples is a main cause of poor performance. In this case, most learning algorithms fail to capture the characteristics of the data, leading to low recall. To solve the problem of unbalanced data, we propose a boosting method that uses linear perceptrons as weak learners. The perceptrons are trained on local data sets. The proposed algorithm is applied to a text filtering problem for which only a small portion of positive examples is available. In an experiment on the crude category of the Reuters-21578 document set, the boosting method achieved a recall of 80.8%, a 37.2% improvement over the multilayer perceptron, with comparable precision.
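
A generic AdaBoost-with-perceptrons sketch of the idea follows, with labels in {+1, -1}. Sampling training points by the boosting weights is one simple way to make a perceptron respect the distribution; the paper's exact local-training scheme is not reproduced here:

```python
import math, random

def train_perceptron(X, y, weights, epochs=10, seed=0):
    """Perceptron trained on points sampled in proportion to `weights`."""
    rng = random.Random(seed)
    w, b = [0.0] * len(X[0]), 0.0
    for i in rng.choices(range(len(X)), weights=weights, k=epochs * len(X)):
        pred = 1 if sum(wi * xi for wi, xi in zip(w, X[i])) + b > 0 else -1
        if pred != y[i]:              # mistake-driven update
            w = [wi + y[i] * xi for wi, xi in zip(w, X[i])]
            b += y[i]
    return w, b

def boost(X, y, rounds=5):
    """AdaBoost over perceptron weak learners; misclassified (e.g. rare
    positive) examples gain weight each round."""
    n = len(X)
    d = [1.0 / n] * n                 # example weights
    learners = []
    for r in range(rounds):
        w, b = train_perceptron(X, y, d, seed=r)
        preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
                 for x in X]
        err = sum(di for di, p, t in zip(d, preds, y) if p != t)
        err = min(max(err, 1e-10), 1 - 1e-10)      # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        learners.append((alpha, w, b))
        d = [di * math.exp(-alpha * t * p)
             for di, p, t in zip(d, preds, y)]
        s = sum(d)
        d = [di / s for di in d]
    return learners

def predict(learners, x):
    """Weighted vote of the perceptrons."""
    score = sum(a * (1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1)
                for a, w, b in learners)
    return 1 if score > 0 else -1
```

For filtering, the positive class would be the relevant documents; reweighting keeps the scarce positives from being drowned out, which is the recall problem the abstract targets.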

The Region Analysis of Document Images Based on One Dimensional Median Filter (1차원 메디안 필터 기반 문서영상 영역해석)

  • 박승호;장대근;황찬식
    • Journal of the Institute of Electronics Engineers of Korea SP / v.40 no.3 / pp.194-202 / 2003
  • Converting printed images into electronic ones automatically requires region analysis of document images and character recognition. In region analysis, the document image is segmented into detailed regions, which are classified into types such as text, picture, and table. It is difficult, however, to classify text and pictures exactly, because some of them are similar in size, density, and complexity of pixel distribution. Thus, misclassification in region analysis is the main obstacle to automatic conversion. In this paper, we propose a region analysis method that segments a document image into text and picture regions. The proposed method solves these problems by using a one-dimensional median filter in text/picture classification. The misclassification of boldface text and of picture regions such as graphs or tables, caused by median filtering, is resolved by using a skin-peeling filter and the maximal text length. The performance is therefore better than that of previous methods, including commercial software.

Knowledge-Based Web Document Filtering (지식기반 웹 문서 필터링)

  • 황상규;김상모;변영태
    • Proceedings of the Korean Information Science Society Conference / 1999.10b / pp.51-53 / 1999
  • The amount of information searchable on the Internet is growing explosively, and Web-based information retrieval systems accordingly need to filter out everything but the information users want, easing the burden of the users' search process. This study examines the problems that novice users unfamiliar with Web search may encounter when actually performing Web searches and, to help such users search the Web more conveniently, develops a knowledge base built on WordNet and a Web document filtering algorithm based on SDCC (Semantic Distance for Common Category), and confirms its effectiveness.
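
One way to picture a common-category distance is sketched below over a toy taxonomy (child -> parent): the distance between two concepts is the number of edges through their lowest common category, and documents beyond a threshold are filtered out. The toy hierarchy, threshold, and this exact distance are illustrative assumptions; the paper's SDCC measure over WordNet is richer:

```python
def path_to_root(node, parent):
    """Concept followed by its chain of ancestors up to the root."""
    path = [node]
    while node in parent:
        node = parent[node]
        path.append(node)
    return path

def semantic_distance(a, b, parent):
    """Edges from a to b via their lowest common category; infinite if
    the concepts share no category at all."""
    pa, pb = path_to_root(a, parent), path_to_root(b, parent)
    ancestors = set(pa)
    for steps_b, node in enumerate(pb):
        if node in ancestors:
            return pa.index(node) + steps_b
    return float("inf")

def filter_docs(query_concept, doc_concepts, parent, threshold=2):
    """Keep only documents whose concept is semantically close to the query."""
    return [d for d, c in doc_concepts.items()
            if semantic_distance(query_concept, c, parent) <= threshold]
```

A novice's query concept then matches semantically nearby documents even when the exact keyword never occurs in them.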
