Title/Summary/Keyword: document classification


A Tensor Space Model based Deep Neural Network for Automated Text Classification (자동문서분류를 위한 텐서공간모델 기반 심층 신경망)

  • Lim, Pu-reum;Kim, Han-joon
    • Database Research
    • /
    • v.34 no.3
    • /
    • pp.3-13
    • /
    • 2018
  • Text classification is one of the text mining technologies that classifies a given textual document into its appropriate categories; it is used in various fields such as spam email detection, news classification, question answering, sentiment analysis, and chatbots. In general, text classification systems rely on machine learning algorithms, and among them naïve Bayes and support vector machines, which are well suited to text data, are known to perform reasonably well. Recently, with the development of deep learning, several studies have applied deep neural networks such as recurrent neural networks (RNN) and convolutional neural networks (CNN) to improve the performance of text classification systems. However, current text classification techniques have not yet reached a fully satisfactory level of accuracy. This paper focuses on the fact that text is usually expressed as a vector over word dimensions only, which impairs the semantic information inherent in the text, and proposes a neural network architecture based upon a semantic tensor space model. (A sketch of the conventional vector-space baseline follows below.)
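The following is a minimal sketch of the conventional vector-space baseline mentioned in the abstract (TF-IDF features with a naïve Bayes classifier), not the proposed tensor space model; the toy documents, labels, and the use of scikit-learn are assumptions for illustration only.

```python
# Baseline bag-of-words text classifier (TF-IDF + naive Bayes): the kind of
# word-dimension vector representation the abstract argues loses semantics.
# The tiny training set below is an invented placeholder.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_docs = ["cheap meds buy now", "meeting agenda attached",
              "win a free prize today", "quarterly report draft"]
train_labels = ["spam", "ham", "spam", "ham"]

clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(train_docs, train_labels)

print(clf.predict(["free prize inside"]))   # expected: ['spam']
```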

Improving Classification Accuracy in Hierarchical Trees via Greedy Node Expansion

  • Byungjin Lim;Jong Wook Kim
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.6
    • /
    • pp.113-120
    • /
    • 2024
  • With the advancement of information and communication technology, we can easily generate various forms of data in our daily lives. To manage such a large amount of data efficiently, systematic classification into categories is essential. For effective search and navigation, data is organized into a tree-like hierarchical structure known as a category tree, commonly seen on news websites and Wikipedia. As a result, various techniques have been proposed to classify large volumes of documents into the terminal nodes of category trees. However, document classification methods using category trees face a problem: as the height of the tree increases, the number of terminal nodes grows exponentially, which increases the probability of misclassification and ultimately reduces classification accuracy. Therefore, in this paper, we propose a new node-expansion-based classification algorithm that satisfies the classification accuracy required by the application while enabling detailed categorization. The proposed method uses a greedy approach that prioritizes the expansion of nodes with high classification accuracy, thereby maximizing the overall classification accuracy of the category tree. Experimental results on real data show that the proposed technique outperforms naive methods. (A schematic sketch of such a greedy expansion loop follows below.)
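Below is a hedged, schematic sketch of a greedy node-expansion loop over a category tree, written from the abstract's description alone; the toy tree, the per-node accuracy estimates, the uniform weights, and the 0.88 accuracy requirement are all hypothetical, and the paper's actual scoring may differ.

```python
# Greedy node expansion over a category tree: repeatedly expand the frontier
# node with the highest estimated accuracy, skipping any expansion that would
# push the weighted overall accuracy below the required level.
import heapq

node_accuracy = {   # hypothetical accuracy estimate if documents are routed to this node
    "root": 1.00, "news": 0.95, "sports": 0.92, "politics": 0.90, "soccer": 0.88,
}
children = {"root": ["news"], "news": ["sports", "politics"], "sports": ["soccer"]}
weight = {n: 1.0 for n in node_accuracy}   # e.g., share of documents routed to each node

def overall_accuracy(frontier):
    total = sum(weight[n] for n in frontier)
    return sum(weight[n] * node_accuracy[n] for n in frontier) / total

def greedy_expand(required_accuracy=0.88):
    frontier = {"root"}
    heap = [(-node_accuracy["root"], "root")]          # most accurate candidates first
    while heap:
        _, node = heapq.heappop(heap)
        kids = children.get(node, [])
        if not kids or node not in frontier:
            continue
        trial = (frontier - {node}) | set(kids)        # tentatively expand this node
        if overall_accuracy(trial) < required_accuracy:
            continue                                   # expansion would hurt accuracy too much
        frontier = trial
        for k in kids:
            heapq.heappush(heap, (-node_accuracy[k], k))
    return frontier

print(greedy_expand())   # e.g. {'politics', 'soccer'}
```

The loop keeps refining the most promising nodes and stops expanding wherever finer categorization would violate the application's accuracy requirement.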

Attribute-Based Classification Method for Automatic Construction of Answer Set (정답문서집합 자동 구축을 위한 속성 기반 분류 방법)

  • 오효정;장문수;장명길
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.7_8
    • /
    • pp.764-772
    • /
    • 2003
  • The main thrust of this paper is based on our experience in developing and applying an attribute-based classification technique in the context of an operational answer-set-driven retrieval system. To alleviate the difficulty and reduce the cost of manually constructing and maintaining answer sets, i.e., the knowledge base, we have devised a new method of automating the answer document selection process by using the notion of attribute-based classification, which is in and of itself novel. We show through experiments how helpful the proposed method is for the knowledge base construction process.

Comments Classification System using Topic Signature (Topic Signature를 이용한 댓글 분류 시스템)

  • Bae, Min-Young;Cha, Jeong-Won
    • Journal of KIISE:Software and Applications
    • /
    • v.35 no.12
    • /
    • pp.774-779
    • /
    • 2008
  • In this work, we describe a comments classification system using topic signature. Topic signature is widely used for feature selection in document classification and summarization. Comments are short and contain many word-spacing errors and special characters. We first convert each comment into 7-grams, which we treat as sentences, and then convert each 7-gram into 3-grams, which we treat as words. We select key features using topic signature and classify new inputs with a naive Bayes classifier. Experimental results show that the proposed method outperforms previous methods. (An illustrative sketch of this kind of pipeline follows below.)
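As a rough illustration of this kind of pipeline (not the authors' exact 7-gram/3-gram conversion or topic-signature scoring), the sketch below uses character 3-gram features, a chi-square top-k selector standing in for topic signature, and a naive Bayes classifier; the comments and labels are invented.

```python
# Character 3-grams as pseudo-words, feature selection, then naive Bayes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

comments = ["great article thanks!!", "u r an idiot", "nice post", "spam spam buy now"]
labels   = ["clean", "abusive", "clean", "spam"]

clf = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(3, 3)),  # 3-grams as pseudo-words
    SelectKBest(chi2, k=30),                               # keep the most indicative n-grams
    MultinomialNB(),
)
clf.fit(comments, labels)
print(clf.predict(["buy now !!"]))   # likely ['spam']
```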

Evaluation of User Profile Construction Method by Fuzzy Inference

  • Kim, Byeong-Man;Rho, Sun-Ok;Oh, Sang-Yeop;Lee, Hyun-Ah;Kim, Jong-Wan
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.8 no.3
    • /
    • pp.175-184
    • /
    • 2008
  • To construct user profiles automatically, a method for extracting representative keywords from a set of documents is needed. In our previous work, we suggested such a method and showed its usefulness. Here, we apply it to the classification problem and observe how much it contributes to performance improvement. The method can be used as a linear document classifier with a few modifications, so we first evaluate its performance in that setting. The method is also applicable to some non-linear classification methods such as GIS (Generalized Instance Set). In the GIS algorithm, generalized instances are built from training documents by a generalization function and the K-NN algorithm is then applied to them, where our method can serve as the generalization function. For comparison, two well-known linear classification methods, the Rocchio and Widrow-Hoff algorithms, are also used. Experimental results show that our method is better than the others when only positive documents are considered, but not when negative documents are considered as well. (A minimal sketch of a Rocchio-style baseline follows below.)
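For reference, here is a minimal sketch of a Rocchio-style linear classifier, one of the comparison baselines named above (not the authors' keyword-extraction method or the fuzzy-inference profile builder); the toy document vectors and the beta/gamma values are illustrative.

```python
# Rocchio-style prototype classifier: build a class prototype from positive and
# negative documents, then score new documents by cosine similarity to it.
import numpy as np

def rocchio_prototype(pos_docs, neg_docs, beta=16.0, gamma=4.0):
    """Prototype = beta * centroid(positives) - gamma * centroid(negatives)."""
    proto = beta * pos_docs.mean(axis=0) - gamma * neg_docs.mean(axis=0)
    return np.maximum(proto, 0.0)          # negative weights are commonly clipped to zero

pos = np.array([[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]])   # toy vectors of the target class
neg = np.array([[0.1, 0.7, 0.9], [0.0, 0.8, 0.6]])   # toy vectors of other classes
proto = rocchio_prototype(pos, neg)

query = np.array([0.7, 0.2, 0.1])                     # new document to score
score = query @ proto / (np.linalg.norm(query) * np.linalg.norm(proto) + 1e-12)
print("cosine score vs. class prototype:", round(float(score), 3))
```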

Composition of volatile organic components on ballpoint pen inks by HS-SPME GC/MS (HS-SPME GC/MS를 이용한 볼펜잉크의 휘발성 성분 분석)

  • Choi, Mi-Jung;Kim, Chang-Seong;Sun, Yale-Shik;Park, Sung-Woo
    • Analytical Science and Technology
    • /
    • v.23 no.4
    • /
    • pp.414-422
    • /
    • 2010
  • In forensic examination of questioned documents, analysis of ink components and the dating of ink entries is often of considerable importance, and the forensic examination of inks is principally concerned with the classification and comparison of chemically complex mixtures. The authenticity of a questioned document may be examined through analysis of the inks used, by TLC, HPLC/MS, GC/MS, or LDI/MS. We collected 56 different types of black ballpoint pen inks manufactured in 5 country groups. Using HS-SPME GC/MS, we identified 6 major volatile organic components (VOCs): ethylbenzene (0.089-0.244 μg/mL), o-xylene (0.072-0.331 μg/mL), m,p-xylene (0.062-0.318 μg/mL), benzene (0.003-0.173 μg/mL), 1,1-dichloroethylene (0.003-0.295 μg/mL), and toluene (0.007-0.484 μg/mL). The results of this study indicate that the determined VOCs of black ballpoint pen inks can serve as a discriminating tool for ink analysis of forensically questioned documents and can provide a methodology for the classification and identification of ballpoint pen inks.

The Classification and filing of the Official Documents of the Office of Crown Properties in the Great Han Empire (대한제국기 내장원의 공문서 편철과 분류)

  • Park, Sung-Joon
    • The Korean Journal of Archival Studies
    • /
    • no.28
    • /
    • pp.3-33
    • /
    • 2011
  • The Office of Crown Properties was established in April 1895 as an institution belonging to the Department of the Royal Household to manage royal properties. However, as the Great Han Empire was established and various policies reinforcing the power of the emperor were introduced, the Office of Crown Properties expanded into a large financial agency in charge of various financial sources such as public land and the maritime tax. As it came to manage these income sources, the Office classified the documents exchanged with government agencies in the capital and the countryside by the unit of Section. Within each Section, documents were filed according to the sending/receiving subject: sometimes only one kind of document was filed, and sometimes many different kinds were filed together. The type of a document can reveal its characteristics and the hierarchy of the related agencies through the document name. The fact that documents of different grades were filed together shows that the hierarchy of the counterpart agency was not the primary standard of filing, and that documents were not filed by type. Nor did the Office of Crown Properties file related documents in the same file. Documents can be considered related if they were exchanged with other agencies while dealing with a specific item; nevertheless, related documents were not filed together, and sending documents were kept separate from receiving documents. The reason different kinds of documents were filed together while related documents were separated is that whether a document was sent or received was taken as the primary filing standard. Documents were first divided into sending and receiving documents and then filed chronologically, regardless of region or institution. In short, the Office of Crown Properties classified documents primarily by Section, then by whether they were sending or receiving documents, and then filed them chronologically. This means that the Office of Crown Properties created an official document classification and filing system.

Gathering Common-word and Document Reclassification to improve Accuracy of Document Clustering (문서 군집화의 정확률 향상을 위한 범용어 수집과 문서 재분류 알고리즘)

  • Shin, Joon-Choul;Ock, Cheol-Young;Lee, Eung-Bong
    • The KIPS Transactions:PartB
    • /
    • v.19B no.1
    • /
    • pp.53-62
    • /
    • 2012
  • Clustering technology is used to deal efficiently with large numbers of retrieved documents in information retrieval systems, but the accuracy of clustering satisfies the requirements of only some domains. This paper proposes two methods to increase clustering accuracy. First, we define a common-word as a word that is used frequently but should have low weight during clustering, and we propose a method that automatically gathers common-words from the retrieved documents and calculates their weights. In experiments, the clustering error rate using common-words is reduced by 34% compared with clustering using a stop-word list. Second, after generating initial clusters from the retrieved documents using average-link clustering, the proposed algorithm re-evaluates the similarity between each document and the clusters and reclassifies the document into the most similar cluster. In experiments using Naver JiSikIn categories, the accuracy of the reclassified clusters is increased by 1.81% compared with the initial clusters without reclassification. (A schematic sketch of both ideas follows below.)
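The sketch below illustrates the two ideas schematically, under assumptions not taken from the paper: an IDF-style weight stands in for the common-word weighting, the initial cluster assignment is arbitrary rather than produced by average-link clustering, and the toy term-frequency vectors are invented.

```python
# (1) down-weight terms that appear in nearly every retrieved document;
# (2) re-evaluate each document against cluster centroids and reassign it.
import numpy as np

docs = np.array([                      # toy term-frequency vectors (rows = documents)
    [3, 1, 0, 2], [2, 0, 1, 2], [0, 3, 2, 2], [1, 2, 3, 2],
], dtype=float)

# (1) common-word weighting: terms present in almost all documents get low weight
doc_freq = (docs > 0).sum(axis=0)
weights = np.log((len(docs) + 1) / (doc_freq + 1))     # IDF-style; the last term gets ~0
weighted = docs * weights

# (2) reclassification: move each document to its most similar centroid
assign = np.array([0, 0, 1, 0])                        # arbitrary initial clusters
for _ in range(5):
    centroids = np.vstack([weighted[assign == c].mean(axis=0) for c in (0, 1)])
    sims = weighted @ centroids.T / (
        np.linalg.norm(weighted, axis=1, keepdims=True)
        * np.linalg.norm(centroids, axis=1) + 1e-12
    )
    new_assign = sims.argmax(axis=1)
    if (new_assign == assign).all():
        break
    assign = new_assign

print("final clusters:", assign)       # documents 2 and 3 end up grouped together
```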

Automatic Classification and Vocabulary Analysis of Political Bias in News Articles by Using Subword Tokenization (부분 단어 토큰화 기법을 이용한 뉴스 기사 정치적 편향성 자동 분류 및 어휘 분석)

  • Cho, Dan Bi;Lee, Hyun Young;Jung, Won Sup;Kang, Seung Shik
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.1
    • /
    • pp.1-8
    • /
    • 2021
  • Political news articles exhibit polarized and biased characteristics, such as conservative or liberal leanings, which is called political bias. We constructed a keyword-based dataset to classify the bias of news articles. Most embedding studies represent a sentence as a sequence of morphemes; in our work, we expect that the number of unknown tokens will be reduced if sentences are composed of subwords segmented by a language model. We propose a document embedding model with subword tokenization and apply it to SVM and feedforward neural network classifiers to classify political bias. Compared with a document embedding model based on morphological analysis, the subword-based document embedding model showed the highest accuracy at 78.22%, and it was confirmed that subword tokenization reduced the number of unknown tokens. Using the best-performing embedding model for the bias classification task, we extracted keywords associated with politicians; the bias of the keywords was verified by the average similarity with the vectors of politicians from each political tendency. (A rough stand-in sketch follows below.)
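As a rough stand-in (the paper uses a learned subword tokenizer and a Korean news corpus, neither of which is reproduced here), the sketch below approximates subwords with character n-grams inside word boundaries and feeds them to a linear SVM; the English texts and bias labels are invented placeholders.

```python
# Subword-like features reduce unknown tokens: unseen words still share
# character n-grams with the training vocabulary.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

train_texts  = ["tax cuts boost growth", "expand public welfare programs",
                "deregulation helps business", "strengthen labor unions now"]
train_labels = ["conservative", "liberal", "conservative", "liberal"]

clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # subword-like features
    LinearSVC(),
)
clf.fit(train_texts, train_labels)
print(clf.predict(["cut taxes for businesses"]))   # likely ['conservative']
```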

A Methodology for Automatic Multi-Categorization of Single-Categorized Documents (단일 카테고리 문서의 다중 카테고리 자동확장 방법론)

  • Hong, Jin-Sung;Kim, Namgyu;Lee, Sangwon
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.77-92
    • /
    • 2014
  • Recently, numerous documents, including unstructured data and text, have been created due to the rapid increase in the usage of social media and the Internet. Each document is usually assigned a specific category for the convenience of users. In the past, this categorization was performed manually; however, manual categorization not only fails to guarantee accuracy but also requires a large amount of time and high costs. Many studies have been conducted on automatic categorization to overcome these limitations. Unfortunately, most of these methods cannot be applied to complex documents with multiple topics because they assume that one document can be assigned to only one category. To overcome this limitation, some studies have attempted to assign each document to multiple categories, but they are limited in that their learning process requires a multi-categorized training set; such methods therefore cannot be applied to the multi-categorization of most documents unless multi-categorized training sets are provided. To remove this requirement of traditional multi-categorization algorithms, we propose a new methodology that can extend the category of a single-categorized document to multiple categories by analyzing the relationships among categories, topics, and documents. First, we find the relationship between documents and topics by using the result of topic analysis on single-categorized documents. Second, we construct a correspondence table between topics and categories by investigating the relationship between them. Finally, we calculate matching scores between each document and multiple categories; a document is classified into a category if and only if its matching score is higher than a predefined threshold. For example, a document can be classified into three categories whose matching scores exceed the threshold. The main contribution of our study is that our methodology can improve the applicability of traditional multi-category classifiers by generating multi-categorized documents from single-categorized documents. Additionally, we propose a module for verifying the accuracy of the proposed methodology. For performance evaluation, we performed intensive experiments with news articles, which are clearly categorized by theme and contain less vulgar language and slang than other typical text documents. We collected news articles from July 2012 to June 2013. The number of articles varies greatly across categories, because readers have different levels of interest in each category and events occur with different frequencies in each category. To minimize distortion caused by the differing number of articles per category, we extracted 3,000 articles from each of eight categories, for a total of 24,000 articles. The eight categories were "IT Science," "Economy," "Society," "Life and Culture," "World," "Sports," "Entertainment," and "Politics." Using the collected news articles, we calculated document/category correspondence scores from the topic/category and document/topic correspondence scores; the document/category correspondence score indicates the degree to which each document corresponds to a certain category. As a result, we could present two additional categories for each of 23,089 documents. Precision, recall, and F-score were 0.605, 0.629, and 0.617, respectively, when only the top-1 predicted category was evaluated, and 0.838, 0.290, and 0.431 when the top 1-3 predicted categories were considered. Interestingly, there was large variation in precision, recall, and F-score across the eight categories. (A schematic sketch of the matching-score step follows below.)
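The matching-score step can be illustrated with a small, hedged sketch: the document/topic and topic/category matrices, the category names, and the 0.30 threshold below are invented placeholders, not values from the paper.

```python
# Combine document/topic and topic/category correspondence scores into
# document/category matching scores, then assign every category above a threshold.
import numpy as np

doc_topic = np.array([          # rows: documents, cols: topic proportions
    [0.70, 0.20, 0.10],
    [0.10, 0.15, 0.75],
])
topic_category = np.array([     # rows: topics, cols: correspondence to categories
    [0.8, 0.3, 0.0],            # topic 0 -> mostly "Economy"
    [0.1, 0.7, 0.2],            # topic 1 -> mostly "Society"
    [0.0, 0.2, 0.9],            # topic 2 -> mostly "Sports"
])
categories = ["Economy", "Society", "Sports"]

doc_category = doc_topic @ topic_category      # document/category matching scores
threshold = 0.30
for i, scores in enumerate(doc_category):
    assigned = [c for c, s in zip(categories, scores) if s >= threshold]
    print(f"doc {i}: scores={np.round(scores, 2)} -> {assigned}")
```

With these placeholder numbers, the first document clears the threshold for two categories and the second for one, which is the single-to-multi extension the methodology describes.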