• Title/Summary/Keyword: Document-Classification

Search results: 448

Application of Text-Classification Based Machine Learning in Predicting Psychiatric Diagnosis (텍스트 분류 기반 기계학습의 정신과 진단 예측 적용)

  • Pak, Doohyun;Hwang, Mingyu;Lee, Minji;Woo, Sung-Il;Hahn, Sang-Woo;Lee, Yeon Jung;Hwang, Jaeuk
    • Korean Journal of Biological Psychiatry / v.27 no.1 / pp.18-26 / 2020
  • Objectives The aim was to find effective vectorization and classification models to predict a psychiatric diagnosis from text-based medical records. Methods Electronic medical records (n = 494) of present illness were collected retrospectively from inpatient admission notes with three diagnoses: major depressive disorder, type 1 bipolar disorder, and schizophrenia. Data were split into 400 training and 94 independent validation records. Data were vectorized by two different models, term frequency-inverse document frequency (TF-IDF) and Doc2vec. Machine learning models for classification, including stochastic gradient descent, logistic regression, support vector classification, and deep learning (DL), were applied to predict the three psychiatric diagnoses. Five-fold cross-validation was used to find an effective model. Metrics such as accuracy, precision, recall, and F1-score were measured for comparison between the models. Results Five-fold cross-validation on the training data showed the DL model with Doc2vec to be the most effective in predicting the diagnosis (accuracy = 0.87, F1-score = 0.87). However, these metrics were reduced on the independent test set with the final DL models (accuracy = 0.79, F1-score = 0.79), while the logistic regression and support vector machine models with Doc2vec showed slightly better performance (accuracy = 0.80, F1-score = 0.80) than the DL models with Doc2vec and the others with TF-IDF. Conclusions The current results suggest that the vectorization may have more impact on classification performance than the machine learning model. However, the data set had a number of limitations, including small sample size, class imbalance, and limited generalizability. In this regard, multi-site research with large samples is needed to improve the machine learning models.
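The TF-IDF vectorization this abstract compares against Doc2vec can be sketched in a few lines of plain Python; the toy "admission note" tokens below are invented for illustration, and Doc2vec itself would require a library such as gensim:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """TF-IDF weights for a list of tokenized documents."""
    n = len(docs)
    df = Counter()                        # document frequency per term
    for doc in docs:
        df.update(set(doc))
    idf = {t: math.log(n / df[t]) for t in df}
    vecs = []
    for doc in docs:
        tf = Counter(doc)                 # raw term frequency
        vecs.append({t: (tf[t] / len(doc)) * idf[t] for t in tf})
    return vecs

notes = [["depressed", "mood", "insomnia"],
         ["elevated", "mood", "grandiosity"]]
vecs = tfidf_vectors(notes)
```

Terms occurring in every document get an IDF of zero, which is one reason the choice of vectorizer can matter more than the downstream classifier, as the abstract concludes.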

A Classification Model for Illegal Debt Collection Using Rule and Machine Learning Based Methods

  • Kim, Tae-Ho;Lim, Jong-In
    • Journal of the Korea Society of Computer and Information / v.26 no.4 / pp.93-103 / 2021
  • Despite the efforts of financial authorities in directly managing and supervising collection agents and in issuing debt-collection guidelines, illegal and unfair debt collection still exists. To effectively prevent such activities, we need a method for strengthening the monitoring of illegal collection even with little manpower, using technologies such as machine learning on unstructured data. In this study, we propose a classification model for illegal debt collection that combines machine learning, such as the Support Vector Machine (SVM), with a rule-based technique that obtains the collection transcripts of loan companies and converts them into text data to identify illegal activities. The study also compares how accurate the identification was for each machine learning algorithm. It shows that combining the rule-based illegality rules with machine learning yields higher accuracy than the classification model of the previous study that applied only machine learning. This study is the first attempt to classify illegality by combining rule-based detection rules with machine learning. If further research is conducted to improve the model's completeness, it will greatly contribute to preventing consumer damage from illegal debt collection.
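The rule-plus-ML combination described here can be sketched as rules taking precedence over a model's prediction; the patterns and labels below are hypothetical stand-ins, since the study's actual ruleset is not given in the abstract:

```python
import re

# Hypothetical illegality patterns; a real system would use the curated rules.
ILLEGAL_PATTERNS = [r"threat", r"late-night call", r"contact(ed)? (your|the) family"]

def rule_flags(transcript):
    """True if any rule-based illegality pattern matches the transcript."""
    return any(re.search(p, transcript.lower()) for p in ILLEGAL_PATTERNS)

def hybrid_classify(transcript, ml_predict):
    """Rules take precedence; the ML model covers cases the rules miss."""
    if rule_flags(transcript):
        return "illegal"
    return ml_predict(transcript)
```

In this arrangement the rules act as a high-precision filter, and the statistical classifier (e.g. an SVM over text features) handles everything the rules do not catch, which matches the accuracy gain the abstract reports over ML alone.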

An Automatic Classification of Korean Documents Using Weight for Keywords of Document and Corpus : Bayesian classifier (문서의 주제어별 가중치와 말뭉치를 이용한 한국어 문서의 자동분류 : 베이지안 분류자)

  • 허준희;고수정;김태용;최준혁;이정현
    • Proceedings of the Korean Information Science Society Conference / 1999.10b / pp.154-156 / 1999
  • Document classification is the assignment of newly created objects to two or more predefined classes. Automatic document classification has long been studied, but its application to Korean has not been pursued as actively as in other areas. In this paper, to classify documents automatically, weights are assigned to the keywords of each document; to supplement the features of sparse documents, each document is represented using words extracted from a corpus by their mutual information with the keywords; and a Bayesian classifier is then applied to the weighted keywords to perform the classification. In the experiments, 1,300 of the 4,414 documents in KTset95, a Korean information-retrieval test collection, were used as the training set and 1,000 documents as the validation set. The results show classification accuracy improved by 1.92% on the training set and 4.3% on the validation set over the existing method using pure Bayesian probability.
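The weighted-keyword Bayesian classification described above can be sketched as a multinomial naive Bayes whose term counts are scaled by per-keyword weights; the add-one smoothing and the toy two-class corpus are assumptions, not details from the paper:

```python
import math
from collections import Counter, defaultdict

class WeightedNaiveBayes:
    """Naive Bayes in which keyword counts are scaled by per-term weights."""

    def fit(self, docs, labels, weights=None):
        self.weights = weights or {}          # term -> importance weight
        self.vocab = set()
        self.class_counts = Counter(labels)
        self.term_counts = defaultdict(Counter)
        for doc, y in zip(docs, labels):
            for t in doc:
                self.term_counts[y][t] += self.weights.get(t, 1.0)
                self.vocab.add(t)
        return self

    def predict(self, doc):
        n = sum(self.class_counts.values())
        best, best_lp = None, float("-inf")
        for y, cy in self.class_counts.items():
            lp = math.log(cy / n)             # class prior
            total = sum(self.term_counts[y].values()) + len(self.vocab)
            for t in doc:                     # add-one smoothed likelihoods
                lp += math.log((self.term_counts[y][t] + 1) / total)
            if lp > best_lp:
                best, best_lp = y, lp
        return best

clf = WeightedNaiveBayes().fit([["match", "goal"], ["market", "stock"]],
                               ["sports", "economy"])
```

The corpus-derived expansion words the paper describes would enter here simply as extra tokens appended to each sparse document before `fit`.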


Variational Expectation-Maximization Algorithm in Posterior Distribution of a Latent Dirichlet Allocation Model for Research Topic Analysis

  • Kim, Jong Nam
    • Journal of Korea Multimedia Society / v.23 no.7 / pp.883-890 / 2020
  • In this paper, we propose a variational expectation-maximization algorithm that computes posterior probabilities for a Latent Dirichlet Allocation (LDA) model. The algorithm approximates the intractable posterior distribution of a document-term matrix generated from a corpus made up of 50 papers. It approximates the posterior by searching for local optima using a lower bound on the true posterior distribution. Moreover, it maximizes this lower bound on the log-likelihood by minimizing the relative entropy between the prior and the posterior distribution, known as the KL divergence. The experimental results indicate that documents clustered under image classification and segmentation are correlated at 0.79, while those clustered under object detection and image segmentation are highly correlated at 0.96. The proposed variational inference algorithm runs efficiently, faster than Gibbs sampling, at a computational time of 0.029 s.
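Variational EM for LDA of the kind described here is what scikit-learn's `LatentDirichletAllocation` implements; the four one-line "abstracts" below are invented placeholders for the paper's 50-document corpus:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "image classification segmentation network",
    "object detection image segmentation",
    "topic model posterior inference",
    "variational inference lower bound",
]
X = CountVectorizer().fit_transform(docs)   # document-term matrix

# Batch variational Bayes: E-step updates per-document topic proportions,
# M-step updates topic-word distributions, maximizing the ELBO.
lda = LatentDirichletAllocation(n_components=2, max_iter=20, random_state=0)
theta = lda.fit_transform(X)                # per-document topic proportions
```

Each row of `theta` is a document's topic distribution, so correlations like the 0.79 and 0.96 figures in the abstract can be computed between documents' topic vectors.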

RSS Web Document Classifier for Educational Blogs (교육용 블로그를 위한 RSS 문서 분류기)

  • Lee, Young-Seok;Kim, Jun-Il;Cho, Jung-Won;Choi, Byung-Uk
    • Proceedings of the IEEK Conference / 2005.11a / pp.1125-1128 / 2005
  • If you are tired of visiting sites in search of the type of web documents that interests you, you can use an RSS (Really Simple Syndication) client to organize web content and deliver it to you in a manner that is much quicker and easier to access. This paper gives an overview of RSS technologies and implements an RSS client suitable for educational blogs. In addition, it proposes a classification method to improve the RSS client.
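An RSS 2.0 feed is plain XML, so a minimal classification step of the kind the paper proposes can be sketched with the standard library; the sample feed and the category-matching rule are assumptions for illustration:

```python
import xml.etree.ElementTree as ET

SAMPLE_RSS = """<rss version="2.0"><channel>
<title>Edu Blog</title>
<item><title>Lecture 1</title><category>math</category></item>
<item><title>Field trip</title><category>news</category></item>
</channel></rss>"""

def classify_items(rss_text, wanted_category):
    """Return the titles of feed items whose <category> matches the class."""
    root = ET.fromstring(rss_text)
    return [item.findtext("title")
            for item in root.iter("item")
            if item.findtext("category") == wanted_category]
```

A fuller classifier would fetch feeds over HTTP and score item text rather than relying on the publisher's `<category>` tag, but the parsing skeleton stays the same.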


Classification of Korean Documents Based on CNN Using Document Indexing Method based on Word Meaning and Order (단어의 의미와 순서를 고려하는 문서색인방법을 이용한 CNN 기반 한글문서분류)

  • Kim, Nam-Hun;Yang, Hyung-Jeong
    • Proceedings of The KACE / 2017.08a / pp.41-45 / 2017
  • This paper proposes a Korean document classification method based on a convolutional neural network (CNN), using a document indexing scheme that reflects both word meaning and word order. Documents are first segmented into word units by morphological analysis, stopwords are removed, the documents are represented so as to capture word meaning, and the word order is preserved in the input to the CNN. Experimental results show that, with a CNN classifier, the proposed indexing method improves performance by 4.2% over TF-IDF and by 1.4% over using Word2vec alone. These results confirm that the proposed method improves document classification performance on the document categorization data set.
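The order-sensitive step that distinguishes a CNN from bag-of-words indexing can be sketched as a 1-D convolution over consecutive word-embedding windows followed by max-pooling over time; the all-ones inputs below are placeholders for trained embeddings and filters:

```python
import numpy as np

def conv1d_max_pool(embeddings, filters):
    """Slide each filter over consecutive word windows, then max-pool.

    embeddings: (seq_len, dim) word vectors in document order.
    filters:    (n_filters, width, dim) convolution filters.
    """
    seq_len, dim = embeddings.shape
    feats = []
    for f in filters:
        width = f.shape[0]
        scores = [np.sum(embeddings[i:i + width] * f)   # window response
                  for i in range(seq_len - width + 1)]
        feats.append(max(scores))                       # max over time
    return np.array(feats)
```

Because each filter spans a window of adjacent words, the pooled features respond to local word order, which a TF-IDF representation discards entirely.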


Implementation of Document Classification Engine by Using Associative Knowledge (연상 지식을 이용한 문서 분류 엔진의 구현)

  • Jang Jung-Hyo;Son Ju-Sung;Lee Sang-Kon;Ahn Dong-Un
    • Proceedings of the Korea Information Processing Society Conference / 2006.05a / pp.625-628 / 2006
  • To judge whether a document's content is relevant, a human reader must read the entire document. When documents are long or contain several scattered topics, identifying a document's field takes considerable time and effort. To reduce this cost, the method proposed in this paper builds a knowledge dictionary from category-tree information and the field-association words extracted from document contents, and implements an automatic classifier that uses this dictionary to reduce the cost of collection and classification.
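The core lookup of such a field-association classifier can be sketched as counting overlaps between a document's tokens and each category's association words; the two-category dictionary below is a hypothetical stand-in for the paper's knowledge dictionary:

```python
# Hypothetical knowledge dictionary: category -> field-association words.
FIELD_ASSOCIATION_WORDS = {
    "sports": {"match", "score", "player"},
    "economy": {"market", "stock", "price"},
}

def classify(tokens):
    """Pick the category whose association words overlap the document most."""
    counts = {cat: sum(t in words for t in tokens)
              for cat, words in FIELD_ASSOCIATION_WORDS.items()}
    return max(counts, key=counts.get)
```

The category tree the paper mentions would refine this by walking from broad fields down to subfields, rescoring at each level with that level's dictionary.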


Classification and Resolution of Conflicts for Integration of Heterogeneous Information Based on XML Schema (XML Schema 기반 이질 정보 통합의 충돌 분류와 해결 방안)

  • 권석훈;이경하;이규철
    • Journal of Information Technology Applications and Management / v.10 no.3 / pp.55-74 / 2003
  • Due to the evolution of computer systems and the proliferation of the Internet, numerous information resources have been constructed. This deluge of information creates the need to integrate information that is distributed across the Internet and handled by heterogeneous systems. Most XML-based information integration systems use an XML DTD (Document Type Definition) to describe the integrated global schema. However, DTD has limitations in modeling local information resources, such as datatypes. Although the W3C's XML Schema is more flexible and powerful than XML DTD for specifying an integrated global schema, it raises more complex conflict-resolution problems than DTD. In this paper, we provide a taxonomy of the conflicts that arise in integrating information resources with XML Schema, and propose a conflict resolution mechanism using XQuery.
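One of the conflict classes such a taxonomy must cover, a datatype conflict where two schemas declare the same element with different types, can be detected with a short stdlib sketch; the two miniature schemas are invented examples:

```python
import xml.etree.ElementTree as ET

XS = "{http://www.w3.org/2001/XMLSchema}"

SCHEMA_A = """<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="price" type="xs:string"/>
  <xs:element name="title" type="xs:string"/>
</xs:schema>"""

SCHEMA_B = """<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="price" type="xs:decimal"/>
</xs:schema>"""

def element_types(schema_text):
    """Map declared element names to their declared types."""
    root = ET.fromstring(schema_text)
    return {e.get("name"): e.get("type") for e in root.iter(XS + "element")}

def type_conflicts(schema_a, schema_b):
    """Element names declared in both schemas but with different types."""
    a, b = element_types(schema_a), element_types(schema_b)
    return {name for name in a.keys() & b.keys() if a[name] != b[name]}
```

Resolving such a conflict in the global schema, e.g. by casting `xs:string` prices to `xs:decimal`, is the kind of transformation the paper delegates to XQuery.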


The Study on the Effective Automatic Classification of Internet Document Using the Machine Learning (기계학습을 기반으로 한 인터넷 학술문서의 효과적 자동분류에 관한 연구)

  • 노영희
    • Journal of Korean Library and Information Science Society / v.32 no.3 / pp.307-330 / 2001
  • This study evaluated the performance of categorization methods using the kNN classifier. Most example-based automatic text categorization techniques, like the kNN classifier, reduce the feature set of the training documents. We sought to find out which percentage reductions of the feature set would result in high performance. In addition, the kNN classifier has to find the k training documents most similar to each test document. We sought to verify the most appropriate k value through experiments.
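The kNN classification being tuned here can be sketched as cosine similarity over sparse term-weight vectors with a majority vote among the k nearest training documents; the sparse-dict representation is an assumption of this sketch:

```python
import math
from collections import Counter

def cosine(u, v):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def knn_predict(train_vecs, train_labels, test_vec, k=3):
    """Vote among the k training documents most similar to the test doc."""
    sims = sorted(zip((cosine(test_vec, v) for v in train_vecs), train_labels),
                  reverse=True)
    votes = Counter(label for _, label in sims[:k])
    return votes.most_common(1)[0][0]
```

The feature-set reduction the study varies would happen before this step, by dropping low-weight terms from the training vectors; both that percentage and `k` are the hyperparameters under test.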


Metadata and Meta-Information System for Hypermedia Documents

  • Woojong Suh;Lee, Heeseok
    • Proceedings of the Korean Operations and Management Science Society Conference / 1998.10a / pp.89-92 / 1998
  • Recently, many organizations have attempted to construct hypermedia systems to expand their working areas to Internet-based virtual workplaces. For the effective management of hypermedia applications, it is important to develop a technique for managing hypermedia documents (hyperdocuments). This paper employs metadata, which has been conceived as a key approach in document management, and proposes a metadata-based meta-information system, HyDoMIS, for hyperdocument management. The system contains a hyperdocument repository based on a metadata schema and classification. HyDoMIS performs functions such as metadata management, searching, and reporting.
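The repository functions named above, storing metadata records against a schema and searching them, can be sketched minimally; the record fields and class names below are illustrative assumptions, not HyDoMIS's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class HyperdocMetadata:
    """A hypothetical minimal metadata record for one hyperdocument."""
    doc_id: str
    title: str
    keywords: list = field(default_factory=list)

class MetadataRepository:
    """Store metadata records and search them by keyword."""

    def __init__(self):
        self._records = {}                  # doc_id -> metadata record

    def add(self, meta):
        self._records[meta.doc_id] = meta

    def search(self, keyword):
        return [m.doc_id for m in self._records.values()
                if keyword in m.keywords]

repo = MetadataRepository()
repo.add(HyperdocMetadata("d1", "Intro page", ["metadata", "hypermedia"]))
repo.add(HyperdocMetadata("d2", "Help page", ["help"]))
```

A reporting function of the kind the paper lists would simply aggregate over `_records`, e.g. counting documents per classification category.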
