Title/Summary/Keyword: Training Document (173 results)


A Study on Document Filtering Using Naive Bayesian Classifier (베이지안 분류기를 이용한 문서 필터링)

  • Lim, Soo-Yeon; Son, Ki-Jun
    • The Journal of the Korea Contents Association, v.5 no.3, pp.227-235, 2005
  • Document filtering is the task of deciding whether a document is relevant to a specified topic. As the Internet and the Web become widespread and the number of documents delivered by e-mail grows explosively, the importance of text filtering increases as well. In this paper, we treat document filtering as a binary document classification problem and propose a news filtering system based on the Bayesian classifier. To perform the filtering, we run experiments to find out how many training documents, and how accurate a relevance check, are needed.
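
As a rough illustration of the filtering setup this abstract describes, here is a minimal sketch using scikit-learn's MultinomialNB; the toy corpus, labels, and 0.5 threshold are placeholders, not the paper's data or settings.

```python
# Minimal sketch: binary document filtering with a naive Bayes classifier.
# The tiny corpus and labels below are illustrative placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train_docs = [
    "stock markets rallied after the earnings report",
    "the central bank raised interest rates again",
    "the team won the championship final last night",
    "the striker scored twice in the second half",
]
train_labels = [1, 1, 0, 0]  # 1 = relevant to the chosen topic, 0 = not

# Bag-of-words counts feed the multinomial naive Bayes model.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(train_docs)
model = MultinomialNB().fit(X, train_labels)

# Filtering decision for an incoming document: keep it if the predicted
# probability of relevance exceeds a threshold (0.5 here, an assumption).
incoming = vectorizer.transform(["rates and bond yields moved sharply"])
p_relevant = model.predict_proba(incoming)[0, 1]
print("relevant" if p_relevant > 0.5 else "discard", p_relevant)
```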

Query-Based Summarization using Semantic Feature Matrix and Semantic Variable Matrix (의미 특징 행렬과 의미 가변행렬을 이용한 질의 기반의 문서 요약)

  • Park, Sun
    • Journal of Advanced Navigation Technology, v.12 no.4, pp.372-377, 2008
  • This paper proposes a new query-based document summarization method using the semantic feature matrix and the semantic variable matrix. The proposed method does not need a training phase with training data comprising queries and query-specific documents, and it summarizes documents accurately for a given query by using semantic features and semantic variables, which are better at identifying the sub-topics of a document. This is because NMF (non-negative matrix factorization) naturally extracts semantic features that represent the inherent structure of a document. The experimental results show that the proposed method achieves better performance than other methods.
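
The factorization at the heart of this method can be sketched as follows. The scoring rule (projecting the query onto the semantic feature matrix W, then weighting sentences through the semantic variable matrix H) is one plausible reading of the abstract, not the paper's exact formulation, and the sentences and query are invented examples.

```python
# Hedged sketch of query-based summarization with NMF: W is the semantic
# feature (term-by-topic) matrix and H the semantic variable
# (topic-by-sentence) matrix.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    "NMF factorizes a nonnegative matrix into two nonnegative factors.",
    "Semantic features capture the inherent structure of a document.",
    "The weather was pleasant during the conference week.",
    "Query terms are matched against semantic features to pick sentences.",
]
query = "semantic features of a document"

vec = TfidfVectorizer()
A = vec.fit_transform(sentences).T          # term-by-sentence matrix
nmf = NMF(n_components=2, init="nndsvda", random_state=0)
W = nmf.fit_transform(A)                    # semantic feature matrix
H = nmf.components_                         # semantic variable matrix

q = vec.transform([query]).toarray().ravel()  # query in term space
topic_weights = q @ W                         # query's load on each topic
scores = topic_weights @ H                    # per-sentence relevance scores
print(sentences[int(np.argmax(scores))])      # best sentence as the summary
```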

Noise Removal using Support Vector Regression in Noisy Document Images

  • Kim, Hee-Hoon; Kang, Seung-Hyo; Park, Jai-Hyun; Ha, Hyun-Ho; Lim, Dong-Hoon
    • The Korean Journal of Applied Statistics, v.25 no.4, pp.669-680, 2012
  • Noise removal from document images is a necessary preprocessing step for recognizing characters effectively, because it greatly influences the processing speed and performance of character recognition. Spatial filters such as traditional mean filters and Gaussian filters, as well as wavelet-transform-based methods, have been used for noise reduction in natural images; however, these methods are not effective for noise removal from document images. In this paper, we present noise removal for document images using support vector regression (SVR). The proposed approach consists of two steps: an SVR training step and an SVR test step. We construct an optimal prediction model using grid search with cross-validation in the training step, and then apply it to noisy images to remove noise in the test step. We evaluate our SVR-based method both quantitatively and qualitatively for noise removal in Korean, English, and Chinese character documents, and compare it to some existing methods. Experimental results indicate that the proposed method is more effective and obtains satisfactory removal results.
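
A minimal sketch of the two-step SVR scheme, assuming the features are a pixel's noisy 3x3 neighborhood and the target is the clean pixel value; the synthetic image, grid values, and subsampling are placeholders rather than the paper's experimental setup.

```python
# Sketch: train an SVR to predict the clean value of a pixel from its
# noisy 3x3 neighborhood, tuning C and gamma by grid search with
# cross-validation, then slide the model over a noisy image.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

rng = np.random.default_rng(0)
clean = rng.integers(0, 2, size=(40, 40)).astype(float)  # toy binary "document"
noisy = clean + rng.normal(0, 0.2, clean.shape)

def patches(img):
    """3x3 neighborhoods (rows) for every interior pixel."""
    h, w = img.shape
    return np.array([img[i-1:i+2, j-1:j+2].ravel()
                     for i in range(1, h-1) for j in range(1, w-1)])

X = patches(noisy)
y = clean[1:-1, 1:-1].ravel()

# Training step: pick C and gamma by cross-validated grid search.
grid = GridSearchCV(SVR(kernel="rbf"),
                    {"C": [1, 10, 100], "gamma": [0.01, 0.1, 1.0]},
                    cv=3)
grid.fit(X[:500], y[:500])           # subsample to keep the sketch fast

# Test step: apply the tuned model to denoise the interior of the image.
denoised = grid.predict(X).reshape(38, 38)
print(grid.best_params_, float(np.abs(denoised - clean[1:-1, 1:-1]).mean()))
```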

Document Imaging & Conversion and the Application of Standards

  • Gamble, Troy A.
    • Proceedings of the CALSEC Conference, 1998.10a, pp.141-148, 1998
  • Design creation is more efficient; a large shared body of knowledge is created and maintained, reducing training time, re-training, costs, and design effort; and electronic data is easier to exchange and integrate. An overall increase in efficiency, savings, and competitiveness will be achieved. (omitted)

Document Flow for the Research Reactor Project in ANSIM Document Control System (ANSIM 문서관리시스템에서 연구로사업 문서흐름)

  • Park, Kook-Nam; Kim, Kwon-Ho; Kim, Jun-Yeon; Wu, Sang-Ik; Oh, Soo-Youl
    • Journal of Korean Society of Industrial and Systems Engineering, v.36 no.4, pp.18-24, 2013
  • A document control system (DCS), ANSIM (KAERI Advanced Nuclear Safety Information Management), was designed for the preparation, review, and approval of documents for the JRTR (Jordan Research and Training Reactor) project. The ANSIM system consists of document management, document container, project management, and organization management folders as well as an EPC (Engineering, Procurement and Construction) document folder. The document container folder keeps the specific contents and revision history of the design documents and drawings issued at KAERI. The EPC document work scope covers registering incoming documents in ANSIM, assigning a manager or person in charge, reviewing documents, and preparing and sending out a PM memorandum with the reviewed paper attached. In addition, KAERI is aiming to set up a separate network server for the NRR (New Research Reactor) by the end of this year. In conclusion: first, ANSIM is a computerized DCS that provides document forms, document numbers, and approval lines. Second, ANSIM increases productivity, because each participant can track both their own document workflow and that of all other participants. Finally, a wealth of experience and knowledge of nuclear technology for design, manufacturing, testing, installation, and commissioning can be transmitted to the next generation. Through this, ANSIM is expected to enable the export of a knowledge and information system as well as of a research reactor.

A Methodology for Automatic Multi-Categorization of Single-Categorized Documents (단일 카테고리 문서의 다중 카테고리 자동확장 방법론)

  • Hong, Jin-Sung; Kim, Namgyu; Lee, Sangwon
    • Journal of Intelligence and Information Systems, v.20 no.3, pp.77-92, 2014
  • Recently, numerous documents, including unstructured data and text, have been created due to the rapid increase in the use of social media and the Internet. Each document is usually assigned a specific category for the convenience of users. In the past, this categorization was performed manually. With manual categorization, however, not only is the accuracy of the categorization not guaranteed, but the categorization also requires a large amount of time and incurs huge costs. Many studies have been conducted on the automatic creation of categories to overcome the limitations of manual categorization. Unfortunately, most of these methods cannot be applied to categorizing complex documents with multiple topics, because they assume that one document can be categorized into only one category. To overcome this limitation, some studies have attempted to categorize each document into multiple categories. However, these are also limited in that their learning process requires training on a multi-categorized document set; they therefore cannot be applied to the multi-categorization of most documents unless multi-categorized training sets are provided. To remove this requirement of traditional multi-categorization algorithms, we propose a new methodology that can extend the category of a single-categorized document to multiple categories by analyzing the relationships among categories, topics, and documents. First, we find the relationship between documents and topics by using the result of topic analysis on single-categorized documents. Second, we construct a correspondence table between topics and categories by investigating the relationship between them. Finally, we calculate the matching score of each document to multiple categories. A document is then classified into a certain category if and only if its matching score is higher than a predefined threshold; for example, we can classify a certain document into the three categories whose matching scores are larger than the predefined threshold. The main contribution of our study is that our methodology can improve the applicability of traditional multi-category classifiers by generating multi-categorized documents from single-categorized documents. Additionally, we propose a module for verifying the accuracy of the proposed methodology. For performance evaluation, we performed intensive experiments with news articles. News articles are clearly categorized by theme, and vulgar language and slang appear less often than in other typical text documents. We collected news articles from July 2012 to June 2013. The number of articles varies greatly across categories, because readers have different levels of interest in each category and because events occur with different frequencies in each category. To minimize distortion of the results due to the different numbers of articles per category, we extracted 3,000 articles from each of the eight categories, so the total number of articles used in our experiments was 24,000. The eight categories were "IT Science," "Economy," "Society," "Life and Culture," "World," "Sports," "Entertainment," and "Politics."
Using the collected news articles, we calculated document/category correspondence scores from the topic/category and document/topic correspondence scores. The document/category correspondence score indicates the degree to which a document corresponds to a certain category. As a result, we could present two additional categories for each of 23,089 documents. Precision, recall, and F-score were 0.605, 0.629, and 0.617, respectively, when only the top predicted category was evaluated, and 0.838, 0.290, and 0.431 when the top one to three predicted categories were considered. Interestingly, there was large variation among the eight categories in precision, recall, and F-score.
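
The final matching-score step lends itself to a short sketch; the topic and category names, matrices, and threshold below are toy values, not the learned correspondences from the paper.

```python
# Illustrative sketch of the matching-score step: combine document-topic
# weights with a topic-category correspondence table, then assign every
# category whose score clears a threshold.
import numpy as np

topics = ["economy", "sports", "politics"]
categories = ["Economy", "Sports", "Politics", "Society"]

# doc_topic[d, t]: topic weights from topic analysis of document d
doc_topic = np.array([[0.7, 0.1, 0.2],
                      [0.1, 0.8, 0.1]])
# topic_cat[t, c]: correspondence of topic t to category c
topic_cat = np.array([[0.9, 0.0, 0.3, 0.4],
                      [0.0, 0.9, 0.0, 0.1],
                      [0.2, 0.0, 0.9, 0.5]])

scores = doc_topic @ topic_cat          # document/category matching scores
threshold = 0.3                         # made-up cutoff for illustration
for d, row in enumerate(scores):
    assigned = [categories[c] for c in np.argsort(row)[::-1] if row[c] > threshold]
    print(f"doc {d}: {assigned}")
```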

An Improvement of Efficiency for kNN by Using a Heuristic (휴리스틱을 이용한 kNN의 효율성 개선)

  • Lee, Jae-Moon
    • The KIPS Transactions: Part B, v.10B no.6, pp.719-724, 2003
  • This paper proposes a heuristic to enhance the speed of kNN without loss of accuracy. The proposed heuristic minimizes the computation of the similarity between two documents, which is the dominant cost in kNN. To do this, the paper proposes a method to calculate an upper limit on the similarity and to sort the training documents. The proposed heuristic was implemented on an existing text-categorization framework, AI::Categorizer, and compared with conventional kNN on the well-known Reuters-21578 data. The comparisons show that the proposed heuristic outperforms conventional kNN by about 30-40% with respect to execution time.
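
The abstract does not spell out the exact bound, so the sketch below substitutes a standard pruning trick in the same spirit: a cheap upper limit on cosine similarity (a partial dot product over the query's heaviest terms plus a Cauchy-Schwarz bound on the rest) that lets many full similarity computations be skipped.

```python
# Hedged sketch of upper-bound pruning for kNN over unit-normalized vectors.
import heapq
import numpy as np

def knn_pruned(query, docs, k=3, m=5):
    """query: 1-D unit vector; docs: 2-D array of unit row vectors."""
    top = np.argsort(query)[::-1][:m]          # query's m heaviest terms
    rest = np.delete(np.arange(query.size), top)
    rest_norm = np.linalg.norm(query[rest])    # bound on the remaining mass
    best = []                                  # min-heap of (similarity, index)
    for i, d in enumerate(docs):
        partial = float(query[top] @ d[top])
        # Cauchy-Schwarz: <q_rest, d_rest> <= ||q_rest|| since ||d|| = 1.
        upper = partial + rest_norm
        if len(best) == k and upper <= best[0][0]:
            continue                           # cannot enter the top k: skip
        sim = float(query @ d)                 # full similarity only if needed
        if len(best) < k:
            heapq.heappush(best, (sim, i))
        elif sim > best[0][0]:
            heapq.heapreplace(best, (sim, i))
    return sorted(best, reverse=True)

rng = np.random.default_rng(1)
docs = rng.random((100, 50))
docs /= np.linalg.norm(docs, axis=1, keepdims=True)   # unit vectors
q = docs[0] + 0.1 * rng.random(50)
q /= np.linalg.norm(q)
print(knn_pruned(q, docs))
```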

Text Classification with Heterogeneous Data Using Multiple Self-Training Classifiers

  • Wong, William Xiu Shun; Lee, Donghoon; Kim, Namgyu
    • Asia Pacific Journal of Information Systems, v.29 no.4, pp.789-816, 2019
  • Text classification is a challenging task, especially when dealing with a huge amount of text data. The performance of a classification model can vary depending on the types of words contained in the document corpus and the types of features generated for classification. Rather than proposing a modified version of an existing algorithm or creating a new one, we attempt to modify the use of the data. Classifier performance is usually affected by the quality of the training data, as the classifier is built from these data. We assume that data from different domains may have different noise characteristics, which can be exploited in the process of learning the classifier. Therefore, we attempt to enhance the robustness of the classifier by artificially injecting heterogeneous data into the learning process in order to improve classification accuracy. A semi-supervised approach was applied to utilize the heterogeneous data in learning the document classifier. However, the performance of the document classifier might be degraded by the unlabeled data, so we further propose an algorithm to extract only the documents that contribute to the accuracy improvement of the classifier.
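
A minimal sketch of the semi-supervised ingredient using scikit-learn's SelfTrainingClassifier; the documents and confidence threshold are placeholders, and the paper's extra step of keeping only accuracy-improving documents is not reproduced here.

```python
# Sketch: unlabeled documents (label -1), standing in for data injected
# from another domain, are folded into training whenever the base
# classifier labels them confidently enough.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

docs = [
    "the match ended with a late goal",          # labeled: sports
    "shares fell as inflation data surprised",   # labeled: finance
    "the coach praised the young midfielder",    # unlabeled (other domain)
    "bond traders weighed the rate decision",    # unlabeled (other domain)
]
labels = [0, 1, -1, -1]   # -1 marks unlabeled documents

X = TfidfVectorizer().fit_transform(docs)
base = LogisticRegression()
clf = SelfTrainingClassifier(base, threshold=0.6).fit(X, labels)
print(clf.predict(X))
```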

Design of Automatic Document Classifier for IT Documents Based on SVM (SVM을 이용한 디렉토리 기반 기술정보 문서 자동 분류시스템 설계)

  • Kang, Yun-Hee; Park, Young-B.
    • Journal of IKEEE, v.8 no.2 s.15, pp.186-194, 2004
  • Due to the exponential growth of information on the Internet, it is getting difficult to find and organize relevant information. To reduce the heavy overload of accesses to information, automatic text classification for handling enormous numbers of documents is necessary. In this paper, we describe the structure and implementation of a document classification system for web documents. We utilize an SVM for the document classification model, which is constructed from a training set and its representative terms in a directory. In our system, the SVM is trained and then used for document classification with a word set extracted from information-and-communication-related web documents. In addition, we use the vector-space model to represent document characteristics based on TF-IDF, and the training data consist of positive and negative classes represented by weighted characteristic sets. Experiments show the categorization results and the correlation with vector length.
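
The TF-IDF-plus-SVM pipeline the abstract describes maps naturally onto a few lines of scikit-learn; the snippets and positive/negative labels below are invented stand-ins for the directory's documents.

```python
# Minimal sketch: TF-IDF vectors with positive and negative training
# classes, fed to a linear SVM for document classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

docs = [
    "routing protocols for wireless sensor networks",   # positive: IT
    "a transport layer congestion control survey",      # positive: IT
    "quarterly earnings and revenue guidance",          # negative
    "the museum opened a new exhibition hall",          # negative
]
labels = [1, 1, 0, 0]

vec = TfidfVectorizer()
X = vec.fit_transform(docs)
svm = LinearSVC().fit(X, labels)
print(svm.predict(vec.transform(["tcp congestion window tuning"])))
```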

A Study of Research on Methods of Automated Biomedical Document Classification using Topic Modeling and Deep Learning (토픽모델링과 딥 러닝을 활용한 생의학 문헌 자동 분류 기법 연구)

  • Yuk, JeeHee; Song, Min
    • Journal of the Korean Society for Information Management, v.35 no.2, pp.63-88, 2018
  • This research evaluated differences in classification performance across feature selection methods using the LDA topic model and Doc2Vec (which is based on word embedding with deep learning), across feature corpus sizes, and across classification algorithms. In addition, to find the feature corpus with the highest classification performance, an experiment was conducted in which the feature corpus was composed differently according to the location within the document and the size of the feature corpus was adjusted. In the experiments using deep learning, the training frequency and the information specifically considered for context inference were evaluated. This study constructed a biomedical document dataset, Disease-35083, consisting of biomedical scholarly documents provided by PMC and categorized by disease. Throughout the study, this research verifies which type and size of feature corpus produces the highest performance and also identifies feature corpora that extend to specific features while remaining efficient during training. Additionally, this research compares deep learning with the existing methods and suggests an appropriate method for each classification environment.
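
One of the compared pipelines, LDA topic proportions as classifier features, can be sketched as follows; the documents and disease labels are placeholders for Disease-35083, and the Doc2Vec variant would swap in gensim document embeddings for the topic features.

```python
# Sketch: LDA document-topic proportions as the feature vector for a
# document classifier.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

docs = [
    "insulin resistance and blood glucose regulation",
    "beta cell function in type two diabetes",
    "tumor suppressor genes and cell cycle arrest",
    "chemotherapy response in carcinoma patients",
]
labels = ["diabetes", "diabetes", "cancer", "cancer"]

counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_features = lda.fit_transform(counts)     # document-topic proportions

clf = LogisticRegression().fit(topic_features, labels)
print(clf.predict(topic_features))
```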