• Title/Summary/Keyword: Document Classifier (문서 분류기)


Text Document Categorization using FP-Tree (FP-Tree를 이용한 문서 분류 방법)

  • Park, Yong-Ki;Kim, Hwang-Soo
    • Journal of KIISE:Software and Applications, v.34 no.11, pp.984-990, 2007
  • As the amount of electronic documents increases explosively, automatic text categorization methods are needed to identify those of interest. Most methods use machine learning techniques based on a word set. This paper introduces a new method, called FPTC (FP-Tree based Text Classifier). FP-Tree is a data structure used in data mining. In this paper, a method of storing text sentence patterns in the FP-Tree structure and classifying text using the patterns is presented. In the experiments conducted, we use our algorithm with a Mutual Information and Entropy approach to improve performance. We also present an analysis comparing the algorithm with an ordinary categorization method.
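The abstract does not give the FPTC algorithm itself; as a minimal sketch of the core idea it names, storing sentence word patterns prefix-shared in an FP-Tree-style structure, the following illustration may help (class and function names are hypothetical, not from the paper):

```python
class FPNode:
    """One node of an FP-Tree: an item, its support count, and its children."""
    def __init__(self, item=None):
        self.item = item
        self.count = 0
        self.children = {}

def insert_pattern(root, pattern):
    """Insert one ordered word pattern (e.g. a sentence's terms) into the tree.
    Patterns that share a prefix share the same initial path, so common
    sentence patterns accumulate high counts on shared nodes."""
    node = root
    for item in pattern:
        child = node.children.get(item)
        if child is None:
            child = FPNode(item)
            node.children[item] = child
        child.count += 1
        node = child

# Toy usage: two sentence patterns sharing the prefix ('text', 'classify').
root = FPNode()
insert_pattern(root, ("text", "classify", "fptree"))
insert_pattern(root, ("text", "classify", "svm"))
```

After the two insertions, the shared prefix nodes carry count 2 while the diverging suffix nodes carry count 1 each.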

Automatic Document Classification Based on k-NN Classifier and Object-Based Thesaurus (k-NN 분류 알고리즘과 객체 기반 시소러스를 이용한 자동 문서 분류)

  • Bang Sun-Iee;Yang Jae-Dong;Yang Hyung-Jeong
    • Journal of KIISE:Software and Applications, v.31 no.9, pp.1204-1217, 2004
  • Numerous statistical and machine learning techniques have been studied for automatic text classification. However, because they train classifiers using only feature vectors of documents, ambiguity between two possible categories significantly degrades classification precision. To remedy this drawback, we propose a new method which incorporates category relationship information into extant classifiers. In this paper, we first perform document classification using the k-NN classifier, which is generally known for relatively good performance despite its simplicity. We then employ relationship information from an object-based thesaurus to reduce the ambiguity: by referencing the various relationships in the thesaurus corresponding to the structured categories, the precision of k-NN classification is drastically improved. Experimental results show that this method improves precision by up to 13.86% over plain k-NN classification while preserving its recall.
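The baseline the paper builds on is standard k-NN over document vectors. A minimal sketch of that baseline (cosine similarity over term-frequency vectors, majority vote; the thesaurus-based disambiguation step is not shown, and all names here are illustrative):

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors (dicts)."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def knn_classify(doc, train, k=3):
    """Vote over the k most similar training documents; train is [(vector, label)]."""
    ranked = sorted(train, key=lambda ex: cosine(doc, ex[0]), reverse=True)
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Toy corpus: two "food" documents and one "tech" document.
train = [
    (Counter("apple fruit sweet".split()), "food"),
    (Counter("banana fruit yellow".split()), "food"),
    (Counter("python code bug".split()), "tech"),
]
label = knn_classify(Counter("sweet apple fruit".split()), train, k=3)
```

The paper's contribution sits on top of this vote: when the top categories are close, thesaurus relationships between the candidate categories break the tie.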

Improving the Accuracy of Document Classification by Learning Heterogeneity (이질성 학습을 통한 문서 분류의 정확성 향상 기법)

  • Wong, William Xiu Shun;Hyun, Yoonjin;Kim, Namgyu
    • Journal of Intelligence and Information Systems, v.24 no.3, pp.21-44, 2018
  • In recent years, the rapid development of Internet technology and the popularization of smart devices have produced massive amounts of text data, distributed through media platforms such as the World Wide Web, Internet news feeds, microblogs, and social media. This enormous amount of easily obtained information, however, lacks organization, a problem that has drawn the interest of many researchers and created demand for professionals capable of classifying relevant information; hence, text classification was introduced. Text classification, a challenging task in modern data analysis, assigns a text document to one or more predefined categories or classes. Available techniques include k-Nearest Neighbor, the Naïve Bayes algorithm, Support Vector Machine, Decision Tree, and Artificial Neural Network. When dealing with huge amounts of text data, however, model performance and accuracy become a challenge: performance varies with the kind of words used in the corpus and the type of features created for classification. Most prior attempts propose a new algorithm or modify an existing one, a line of research that can be said to have reached its limits. In this study, instead of proposing or modifying an algorithm, we focus on modifying the use of the data. It is widely known that classifier performance is influenced by the quality of the training data on which the classifier is built; real-world datasets usually contain noise, which can affect the decisions made by classifiers built from them.
In this study, we consider that data from different domains, i.e., heterogeneous data, may carry noise characteristics that can be exploited in the classification process. Machine learning algorithms build classifiers under the assumption that the characteristics of the training data and the target data are the same or very similar. For unstructured data such as text, however, the features are determined by the vocabulary of the documents, so if the viewpoints of the training data and the target data differ, their features may differ as well. We therefore attempt to improve classification accuracy by strengthening the robustness of the document classifier through artificially injecting noise into its construction. Because data coming from various sources are likely formatted differently, traditional machine learning algorithms struggle: they are not designed to recognize different types of data representation at once and combine them in the same generalization. To utilize heterogeneous data in training the document classifier, we apply semi-supervised learning. Since unlabeled data may degrade classifier performance, we further propose a Rule Selection-Based Ensemble Semi-Supervised Learning Algorithm (RSESLA) that selects only the documents contributing to the accuracy improvement of the classifier. RSESLA creates multiple views by manipulating the features using different types of classification models and different types of heterogeneous data; the most confident classification rules are selected and applied for the final decision.
In this paper, three types of real-world data sources were used: news, Twitter, and blogs.
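RSESLA itself is not specified in the abstract; the underlying semi-supervised idea, promote only confidently labeled unlabeled documents into the training set, can be sketched as plain self-training (the multi-view rule selection is more elaborate; all names and the toy keyword "trainer" below are illustrative assumptions):

```python
def self_train(fit, labeled, unlabeled, threshold=0.9, max_rounds=5):
    """Plain self-training: repeatedly train, then promote only unlabeled
    documents predicted with confidence >= threshold into the labeled set."""
    labeled, pool = list(labeled), list(unlabeled)
    for _ in range(max_rounds):
        predict = fit(labeled)                  # returns doc -> (label, confidence)
        scored = [(d,) + predict(d) for d in pool]
        promoted = [(d, lab) for d, lab, conf in scored if conf >= threshold]
        if not promoted:
            break
        labeled += promoted
        pool = [d for d, lab, conf in scored if conf < threshold]
    return labeled

# Toy "trainer": a fixed keyword rule standing in for a real learner.
def toy_fit(examples):
    def predict(doc):
        return ("pos", 0.95) if "good" in doc else ("neg", 0.5)
    return predict

grown = self_train(toy_fit,
                   labeled=[(["good", "movie"], "pos")],
                   unlabeled=[["good", "film"], ["bad", "film"]])
```

Here only the confidently predicted document is promoted; the low-confidence one is held back, which is the filtering role RSESLA's rule selection plays.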

A Question Type Classifier based on a Support Vector Machine for a Korean Question-Answering System (한국어 질의응답시스템을 위한 지지 벡터기계 기반의 질의유형분류기)

  • 김학수;안영훈;서정연
    • Journal of KIISE:Software and Applications, v.30 no.5_6, pp.466-475, 2003
  • To build an efficient Question-Answering (QA) system, a question type classifier is needed. It can classify users' queries into predefined categories regardless of the surface form of a question. In this paper, we propose a question type classifier using a Support Vector Machine (SVM). The classifier first extracts features such as lexical forms, parts of speech, and semantic markers from a user's question. The system uses the $\chi^2$ statistic to select important features, and the selected features are represented as a vector. Finally, an SVM categorizes questions into predefined categories according to the extracted features. In the experiment, the proposed system achieved 86.4% accuracy. The system precisely classifies question types without using any rules such as lexico-syntactic patterns; therefore, it is robust and easily portable to other domains.
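The $\chi^2$ feature-selection step the abstract mentions has a standard closed form over a term-category contingency table; a small sketch of that statistic (the function name and toy numbers are illustrative, not from the paper):

```python
def chi_square(A, B, C, D):
    """Chi-square association between a term and a category, from the 2x2 table:
    A = docs of the category containing the term, B = other docs containing it,
    C = docs of the category lacking the term,   D = other docs lacking it.
    chi2 = N * (A*D - B*C)^2 / ((A+B)(C+D)(A+C)(B+D))."""
    N = A + B + C + D
    denom = (A + B) * (C + D) * (A + C) * (B + D)
    return N * (A * D - B * C) ** 2 / denom if denom else 0.0
```

Features are then ranked by this score per category and only the top-scoring ones are kept in the question vector. A term occurring in every category document and nowhere else scores maximally; a term spread evenly across categories scores zero.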

Comparison Between Optimal Features of Korean and Chinese for Text Classification (한중 자동 문서분류를 위한 최적 자질어 비교)

  • Ren, Mei-Ying;Kang, Sinjae
    • Journal of the Korean Institute of Intelligent Systems, v.25 no.4, pp.386-391, 2015
  • This paper proposes optimal features for text classification based on the linguistic characteristics of Korean and Chinese. The experiments sought to discover the best feature among n-grams, which are language independent, morphemes, which are language dependent, and feature sets combining n-grams and morphemes. We used an SVM classifier and Internet news articles for text classification. As a result, bi-grams were the best feature for Korean text categorization, with the highest F1-measure of 87.07%; for Chinese document classification, the combined feature set 'uni-gram+noun+verb+adjective+idiom' showed the best performance, with the highest F1-measure of 82.79%.
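For Korean, the winning bi-gram feature is typically extracted at the character (syllable) level; a minimal sketch of such extraction (the whitespace handling here is a simplifying assumption, not the paper's exact preprocessing):

```python
def char_bigrams(text):
    """Character bi-grams over the text with whitespace removed.
    For Korean, each character is a syllable block, so bi-grams capture
    sub-word units without any morphological analysis."""
    s = text.replace(" ", "")
    return [s[i:i + 2] for i in range(len(s) - 1)]
```

For example, `char_bigrams("문서 분류")` yields the syllable pairs `문서`, `서분`, `분류`, which become the dimensions of the document vector fed to the SVM.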

Implementation of Web-based Document Classification System using Naïve Classifier (Naïve 분류기를 이용한 웹 기반 문서 분류기 구현)

  • Park, Jea-Hyun;Choi, Kwang-Bok;Han, Ju-Hyun;Choi, Won-Jong;Yang, Jaeyoung;Choi, Joongmin
    • Proceedings of the Korea Information Processing Society Conference, 2004.05a, pp.343-346, 2004
  • The Bayesian probability model is a widely used theory in document classification. In practice, however, systems built on Bayesian theory have the drawback of requiring long processing times. In this paper, we implement the conventional Bayesian model for document classification and, at the same time, a system that improves its time efficiency through several techniques.
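The Bayesian model referred to here is, in its textbook form, a multinomial Naive Bayes classifier; a compact self-contained sketch of that form with Laplace smoothing (this is the standard formulation, not the paper's specific optimized implementation):

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: [(list_of_words, label)]. Returns a classify(words) function using
    log-priors plus Laplace-smoothed log-likelihoods per word."""
    labels = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)
    vocab = set()
    for words, label in docs:
        word_counts[label].update(words)
        vocab.update(words)
    V = len(vocab)

    def classify(words):
        best, best_lp = None, float("-inf")
        for label, n in labels.items():
            total = sum(word_counts[label].values())
            lp = math.log(n / len(docs))                       # log prior
            for w in words:                                     # log likelihoods
                lp += math.log((word_counts[label][w] + 1) / (total + V))
            if lp > best_lp:
                best, best_lp = label, lp
        return best

    return classify

classify = train_nb([
    ("cheap pills buy".split(), "spam"),
    ("meeting schedule today".split(), "ham"),
])
```

The time cost the paper targets comes from scoring every category's likelihood for every word of every document; their system keeps this model but speeds up that loop.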


A Hyperlink-based Feature Weighting Technique for Web Document Classification (웹문서 자동 분류를 위한 하이퍼링크 기반 특징 가중치 부여 기법)

  • Lee, A-Ram;Kim, Han-Joon
    • Proceedings of the Korea Information Processing Society Conference, 2012.11a, pp.417-420, 2012
  • Automatic document classification systems based on machine learning use words as features to build the classification model. To raise the performance of such systems, various studies have sought to select more meaningful features for the classification model. In particular, web documents on the Internet carry tag information and link information in addition to words. In this paper, we propose a method that uses these two kinds of information to improve the performance of automatic web document classification. Appropriate features are selected using the tag and link information, the importance of each feature is computed to obtain its weight, and the weights are assigned to the features to build the classification model. Performance was evaluated with a Naive Bayesian classifier.

Improving Time Efficiency of kNN Classifier Using Keywords (대표용어를 이용한 kNN 분류기의 처리속도 개선)

  • 이재윤;유수현
    • Proceedings of the Korean Society for Information Management Conference, 2003.08a, pp.65-72, 2003
  • The kNN method shows high automatic classification performance but has the drawback of slow processing speed. To overcome this, we propose kw_kNN, which speeds up automatic classification by selecting w representative terms from the input document and shrinking the training set to only those training documents that contain them. Experiments show that with five representative terms, the number of document comparisons was reduced to 18.4% of plain kNN on average, while performance loss was minimal: the macro-averaged F1 measure was unchanged, and micro-averaged precision came within about 1-2 percentage points of the kNN method.
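The kw_kNN pre-filtering step can be sketched with an inverted index: pick the document's top-w terms and keep only training documents containing at least one of them (function and variable names are illustrative; the paper's term-selection criterion may differ from raw frequency):

```python
from collections import Counter

def kw_knn_candidates(doc_terms, inverted_index, w=5):
    """Return ids of training documents sharing at least one of the input
    document's top-w terms; only these candidates are compared by kNN.
    doc_terms: Counter of the input document's terms.
    inverted_index: {term: set of training-document ids}."""
    top_terms = [t for t, _ in doc_terms.most_common(w)]
    candidates = set()
    for t in top_terms:
        candidates |= inverted_index.get(t, set())
    return candidates

# Toy index over four training documents.
index = {"knn": {1, 2}, "speed": {2, 3}, "cat": {4}}
cands = kw_knn_candidates(Counter({"knn": 3, "speed": 2, "dog": 1}), index, w=2)
```

Document 4 shares no representative term and is never compared, which is where the reported reduction in comparisons comes from.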


BPNN Algorithm with SVD Technique for Korean Document categorization (한글문서분류에 SVD를 이용한 BPNN 알고리즘)

  • Li, Chenghua;Byun, Dong-Ryul;Park, Soon-Choel
    • Journal of Korea Society of Industrial Information Systems, v.15 no.2, pp.49-57, 2010
  • This paper proposes a Korean document categorization algorithm using a Back Propagation Neural Network (BPNN) with Singular Value Decomposition (SVD). BPNN builds a network through its learning process and classifies documents using that network. The main difficulty in applying BPNN to document categorization is the high dimensionality of the feature space of the input documents. SVD projects the original high-dimensional vectors into low-dimensional vectors, captures the important associative relationships between terms, and constructs the semantic vector space. The categorization algorithm is tested and compared on the HKIB-20000/HKIB-40075 Korean Text Categorization Test Collections. Experimental results show that the BPNN algorithm with SVD achieves high effectiveness for Korean document categorization.
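The dimensionality-reduction step can be sketched with a truncated SVD of a term-document matrix: documents are projected into a k-dimensional semantic space, and those short vectors become the BPNN's input (the matrix below is a made-up toy; the paper's preprocessing and k are not specified here):

```python
import numpy as np

# Toy term-document matrix: 5 terms x 4 documents (counts are invented).
X = np.array([[2., 0., 1., 0.],
              [1., 0., 2., 0.],
              [0., 3., 0., 1.],
              [0., 1., 0., 2.],
              [1., 1., 1., 1.]])

# Rank-k truncated SVD: X ~= U_k S_k Vt_k. Each document's k-dimensional
# representation is a row of (S_k Vt_k)^T; correlated terms collapse onto
# shared latent dimensions, which is the "semantic vector space".
k = 2
U, s, Vt = np.linalg.svd(X, full_matrices=False)
doc_vectors = (np.diag(s[:k]) @ Vt[:k]).T   # one k-dim vector per document

print(doc_vectors.shape)  # (4, 2)
```

Instead of a 5-dimensional (in practice, tens-of-thousands-dimensional) input layer, the network now trains on 2-dimensional inputs.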

Korean Morphological Analysis Considering a Term with Multiple Parts of Speech ("의미적 한 단어" 유형 분석 및 형태소 분석 기법)

  • Hur, Yun-Young;Kwon, Hyuk-Chul
    • Annual Conference on Human and Language Technology, 1994.11a, pp.128-131, 1994
  • Korean documents in specialized domains, such as newspapers, current-affairs magazines, legal documents, economics texts, and Korean-literature texts, contain many "semantic single words": combinations of Hangul, Chinese characters, English, and punctuation symbols that express a single meaning. Such words lower the analysis rate of morphological analyzers that do not account for them and increase their misanalysis rate. This paper classifies the types of "semantic single words", both by form and by analysis process, and presents a morphological analysis technique suited to them. A morphological analyzer implemented with this type classification and the proposed technique achieved a higher analysis rate and a lower misanalysis rate than existing analyzers.
