• Title/Summary/Keyword: document classifier (문서 분류기)

Search Results: 191

An XML-QL to SQL Translator for Processing XML Data (XML 데이타 처리를 위한 XML-QL to SQL 번역기)

  • Jang, Gyeong-Ja;Lee, Gi-Ho
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.8 no.1
    • /
    • pp.1-8
    • /
    • 2002
  • XML has been proposed as an international standard for organizing and exchanging a wide variety of Web data, and retrieving components of stored XML documents is important for many applications. In this paper, we suggest a method for storing XML documents and retrieving XML data. Specifically, we retrieve XML data using XML-QL, which requires an XML-QL to SQL translator on top of an RDBMS. The contributions of this paper include, besides the detailed design and implementation of the translator, a demonstration of the feasibility of such a translator and a comprehensive classification of XML queries and their mappings to SQL relational queries.
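To illustrate the kind of mapping such a translator performs, the sketch below converts a simple root-to-leaf path query into SQL over a hypothetical edge table `xml_edge(node_id, parent_id, tag, value)`. The table layout and function name are assumptions for illustration, not the paper's actual design.

```python
def path_to_sql(path):
    """Translate a tag path like 'book/author/name' into a self-join SQL
    query over a hypothetical edge table 'xml_edge'."""
    tags = path.split("/")
    joins = []
    for i, tag in enumerate(tags):
        alias = f"e{i}"
        cond = f"{alias}.tag = '{tag}'"
        if i > 0:
            # each step must be a child of the previous step's node
            cond += f" AND {alias}.parent_id = e{i-1}.node_id"
        joins.append((alias, cond))
    from_clause = ", ".join(f"xml_edge {a}" for a, _ in joins)
    where_clause = " AND ".join(c for _, c in joins)
    return f"SELECT e{len(tags)-1}.value FROM {from_clause} WHERE {where_clause}"

sql = path_to_sql("book/author/name")
```

A full XML-QL translator must also handle variables, predicates, and result construction; this only shows the structural join pattern that path traversal induces on a relational store.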

Comparison of Korean Classification Models' Korean Essay Score Range Prediction Performance (한국어 학습 모델별 한국어 쓰기 답안지 점수 구간 예측 성능 비교)

  • Cho, Heeryon;Im, Hyeonyeol;Yi, Yumi;Cha, Junwoo
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.3
    • /
    • pp.133-140
    • /
    • 2022
  • We investigate the performance of deep learning-based Korean language models on a task of predicting the score range of Korean essays written by foreign students. We construct a data set containing a total of 304 essays, which include essays discussing the criteria for choosing a job ('job'), conditions of a happy life ('happ'), relationship between money and happiness ('econ'), and definition of success ('succ'). These essays were labeled according to four letter grades (A, B, C, and D), and a total of eleven essay score range prediction experiments were conducted (i.e., five for predicting the score range of 'job' essays, five for predicting the score range of 'happiness' essays, and one for predicting the score range of mixed topic essays). Three deep learning-based Korean language models, KoBERT, KcBERT, and KR-BERT, were fine-tuned using various training data. Moreover, two traditional probabilistic machine learning classifiers, naive Bayes and logistic regression, were also evaluated. Experiment results show that deep learning-based Korean language models performed better than the two traditional classifiers, with KR-BERT performing the best with 55.83% overall average prediction accuracy. A close second was KcBERT (55.77%) followed by KoBERT (54.91%). The performances of naive Bayes and logistic regression classifiers were 52.52% and 50.28% respectively. Due to the scarcity of training data and the imbalance in class distribution, the overall prediction performance was not high for all classifiers. Moreover, the classifiers' vocabulary did not explicitly capture the error features that were helpful in correctly grading the Korean essay. By overcoming these two limitations, we expect the score range prediction performance to improve.

A Personalized Retrieval System Based on Classification and User Query (분류와 사용자 질의어 정보에 기반한 개인화 검색 시스템)

  • Kim, Kwang-Young;Shim, Kang-Seop;Kwak, Seung-Jin
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.43 no.3
    • /
    • pp.163-180
    • /
    • 2009
  • In this paper, we describe a system that establishes a user's personal information tendency from user queries. The system classifies each query by category using a kNN classifier, with the DDC field already assigned to each record in the database serving as the category information. By accumulating category information across all of a user's queries, the system builds the user's personalized profile for the target database. We then developed a personalized retrieval system that reflects this profile in the search results: result documents whose categories match the user's profile receive additional weight and are re-ranked accordingly. Using the user's tendency information also helps resolve word ambiguity. We conducted experiments on personalized search and word sense disambiguation (WSD) over a collection of Korean science and technology journal articles. The experimental results and user evaluations show that the personalized search system and WSD are useful for actual field services.
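The re-ranking step described above can be sketched as a simple score boost for profile-matching categories. The profile representation (category frequency counts) and the linear boost are assumptions for illustration; the paper's actual weighting scheme may differ.

```python
def rerank(results, user_profile, boost=0.2):
    """Re-rank results by boosting documents whose category (e.g. a DDC
    code) appears in the user's accumulated query-category profile."""
    def score(doc):
        # base retrieval score plus a boost proportional to how often
        # the user's past queries fell into this category
        return doc["score"] + boost * user_profile.get(doc["category"], 0.0)
    return sorted(results, key=score, reverse=True)

# hypothetical profile: the user mostly queries computer science (DDC 004)
profile = {"004": 5, "020": 1}
docs = [
    {"id": "a", "score": 1.0, "category": "810"},  # literature
    {"id": "b", "score": 0.9, "category": "004"},  # computer science
]
ranked = rerank(docs, profile)
```

Here document "b" overtakes "a" despite a lower base score, which is also how category matching can disambiguate an ambiguous query term toward the user's habitual subject area.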

A Study on the Development of Search Algorithm for Identifying the Similar and Redundant Research (유사과제파악을 위한 검색 알고리즘의 개발에 관한 연구)

  • Park, Dong-Jin;Choi, Ki-Seok;Lee, Myung-Sun;Lee, Sang-Tae
    • The Journal of the Korea Contents Association
    • /
    • v.9 no.11
    • /
    • pp.54-62
    • /
    • 2009
  • To avoid redundant investment during project selection, it is necessary to check whether a submitted research topic has already been proposed or carried out at another institution. This is possible through search engines that apply keyword-matching algorithms based on Boolean techniques over national research-results databases. Although the accuracy and speed of such retrieval have improved, keyword matching still has fundamental limits. This paper examines an implemented TF-IDF-based algorithm and presents an experiment in which a search engine retrieves and ranks documents that are similar to, or redundant with, a given research proposal. In addition to the generic TF-IDF algorithm, feature weighting and k-Nearest Neighbors classification are incorporated. The test documents were extracted from the NDSL (National Digital Science Library) web directory service.
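A minimal sketch of the TF-IDF similarity ranking at the core of such duplicate detection, assuming log-scaled term frequency and cosine similarity (the paper additionally applies feature weighting and kNN, which are omitted here):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build TF-IDF vectors for a list of tokenized documents."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))  # document frequency
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        # log-scaled tf times idf; terms in every document carry no weight
        vecs.append({t: (1 + math.log(c)) * math.log(n / df[t])
                     for t, c in tf.items() if df[t] < n})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse vectors (dicts)."""
    dot = sum(w * v[t] for t, w in u.items() if t in v)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

corpus = [["xml", "query"], ["xml", "sql"], ["crawler", "web"]]
vecs = tfidf_vectors(corpus)
```

Ranking proposals by cosine similarity against the database then yields the priority-ordered list of potentially redundant documents.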

Implementation of Audit Trail Service System for EDI Security (EDI 보안 감사 추적 서비스 시스템 구현)

  • Jeong, Gyeong-Ja;Kim, Gi-Jung;Seo, Gyeong-Ran;Ryu, Geun-Ho;Gang, Chang-Gu
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.3
    • /
    • pp.754-766
    • /
    • 1997
  • In this paper, we implement an audit trail service system for EDI security. It resolves legal disputes between enterprises using the information generated by the EDI service system. The implemented audit trail service system satisfies the audit requirements and the security service protocols of X.435 and X.400. The EDI security audit system consists of an event discriminator, an audit recorder, an audit archiver, and an audit service provider. The event discriminator classifies data transmitted over the EDI network for audit services. The audit recorder builds an index that combines time information with the audit information classified by the event discriminator. The audit archiver vacuums audit information that has accumulated over time. The audit provider is the module that carries out audit trail services using the stored audit information; it supports non-repudiation, proof and probe, security control, and access to audit information. Because the system builds audit information on an index that incorporates time information, it supports especially fast access to audit records.


Recognition of Various Printed Hangul Images by using the Boundary Tracing Technique (경계선 기울기 방법을 이용한 다양한 인쇄체 한글의 인식)

  • 백승복;강순대;손영선
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2002.12a
    • /
    • pp.357-360
    • /
    • 2002
  • In this paper, we implement a system that recognizes characters in printed Hangul images captured by a monochrome CCD camera and converts them into an editable text document. By using the boundary tracing (edge gradient) technique, which is robust to noise, we extract contour information based on the structural characteristics of each character. Using this information, the system identifies the horizontal and vertical vowels of each character image, classifies the character into one of six types, separates it into graphemes, and recognizes the vowels using maximum-length projection. Separated consonants are recognized by comparing the phase pattern of boundary changes against pre-stored standard patterns. Recognized characters are output to a text editor in KS completed-type Hangul code and presented to the user.

Sentiment Classification of Movie Reviews using Levenshtein Distance (Levenshtein 거리를 이용한 영화평 감성 분류)

  • Ahn, Kwang-Mo;Kim, Yun-Suk;Kim, Young-Hoon;Seo, Young-Hoon
    • Journal of Digital Contents Society
    • /
    • v.14 no.4
    • /
    • pp.581-587
    • /
    • 2013
  • In this paper, we propose a sentiment classification method that uses Levenshtein distance. We generate a BOW (Bag-Of-Words) by applying Levenshtein distance to sentiment features and use it as the training set. The machine learning algorithms used are SVMs (Support Vector Machines) and NB (Naive Bayes). As the data set, we gathered 2,385 movie reviews from an online movie community (the Daum movie service) and manually selected 778 sentiment words from them. In the experiment, we trained classifiers on the BOW generated by applying Levenshtein distance to the sentiment words, and evaluated each classifier using 10-fold cross validation. The best accuracy, 85.46%, was obtained with multinomial Naive Bayes at a Levenshtein distance of 3. The results show that the proposed method is less affected by spelling errors in documents.
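The matching step that makes the approach robust to misspellings can be sketched as follows: a standard dynamic-programming edit distance, plus a helper that maps a (possibly misspelled) token to sentiment features within a distance threshold. The helper's name and the simple nearest-match policy are illustrative assumptions, not the paper's exact procedure.

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            cur[j] = min(prev[j] + 1,        # deletion
                         cur[j - 1] + 1,     # insertion
                         prev[j - 1] + cost) # substitution
        prev = cur
    return prev[n]

def match_features(token, sentiment_words, max_dist=3):
    """Map a review token to sentiment features whose edit distance is
    within max_dist, so misspelled tokens still hit BOW features."""
    return [w for w in sentiment_words if levenshtein(token, w) <= max_dist]
```

With `max_dist=3` (the threshold that performed best in the paper), a token with a few character errors still activates the intended sentiment feature in the BOW.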

Building Database using Character Recognition Technology (문자 인식 기술을 이용한 데이터베이스 구축)

  • Han, Seon-Hwa;Lee, Chung-Sik;Lee, Jun-Ho;Kim, Jin-Hyeong
    • The Transactions of the Korea Information Processing Society
    • /
    • v.6 no.7
    • /
    • pp.1713-1723
    • /
    • 1999
  • Optical character recognition (OCR) might be the most plausible method for building databases from printed matter. This paper describes the points to consider when selecting an OCR system for database construction. Based on these considerations, we evaluated four commercial OCR systems and chose the one with the best recognition rate to build an OCR-text database. The subject text, the KT test collection, is a set of abstracts from proceedings of varying printing quality, fonts, and formats. The KT test collection also comes with a typed-text database, so the recognition rate was calculated by comparing the recognition results against the typed text. No preprocessing, such as learning or slant correction, was applied during recognition, in order to simulate a practical environment. The result shows a 90.5% character recognition rate over 970 abstracts, which is still insufficient for practical use. The errors in OCR texts differ from those in manually typed texts; in this paper, we classify the errors in OCR texts for further research.


An Automated Topic Specific Web Crawler Calculating Degree of Relevance (연관도를 계산하는 자동화된 주제 기반 웹 수집기)

  • Seo Hae-Sung;Choi Young-Soo;Choi Kyung-Hee;Jung Gi-Hyun;Noh Sang-Uk
    • Journal of Internet Computing and Services
    • /
    • v.7 no.3
    • /
    • pp.155-167
    • /
    • 2006
  • It is desirable for users surfing the Internet to find Web pages as closely related to their interests as possible. Toward this end, this paper presents a topic-specific Web crawler that computes the degree of relevance, collects a cluster of pages for a given topic, and refines the preliminary set of related pages using term frequency/document frequency, entropy, and compiled rules. In the experiments, we tested our topic-specific crawler in terms of classification accuracy, crawling efficiency, and crawling consistency. First, the classification accuracy using the rules compiled by CN2 was the best among CN2, C4.5, and back-propagation learning algorithms. Second, we measured classification efficiency to determine the best threshold value affecting the degree of relevance. Third, the consistency of the crawler was measured as the number of resulting URLs that overlapped across different starting URLs. The experimental results imply that our topic-specific crawler is fairly consistent, regardless of the randomly chosen starting URLs.
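A minimal sketch of a degree-of-relevance score built from term frequency and document frequency, as the crawler above uses to decide whether a fetched page stays in the topic cluster. The exact scoring form is an assumption; the paper additionally combines entropy and compiled rules, which are omitted here.

```python
import math
from collections import Counter

def relevance(page_tokens, topic_terms, doc_freq, n_docs):
    """Score a fetched page against a topic: term frequency on the page
    weighted by inverse document frequency, normalized by page length.
    A crawler would keep pages whose score exceeds a tuned threshold."""
    tf = Counter(page_tokens)
    score = 0.0
    for term in topic_terms:
        if tf[term]:
            # rarer topic terms in the reference corpus count for more
            idf = math.log(n_docs / (1 + doc_freq.get(term, 0)))
            score += tf[term] * idf
    return score / (len(page_tokens) or 1)

topic = ["crawler", "relevance"]
df = {"crawler": 10, "relevance": 5}   # hypothetical corpus statistics
on_topic = ["crawler", "relevance", "crawler", "web"]
off_topic = ["cooking", "recipe", "pasta"]
```

The threshold experiment described in the abstract then amounts to sweeping the cutoff on this score and measuring classification efficiency at each value.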


Error Detection Method for Korean Compound Noun Decomposition (한국어 복합명사 분해 오류 탐지 기법)

  • Kang, Minkyu;Kang, Seungshik
    • Annual Conference on Human and Language Technology
    • /
    • 2009.10a
    • /
    • pp.181-185
    • /
    • 2009
  • Decomposition errors that arise when splitting compound nouns are mostly treated as exceptional cases: they account for a small share of the whole, while the cost of handling them is relatively high. However, when the decomposed data is actually fed into an indexer, a document classifier, or a machine translator, correcting the decomposition errors yields better performance, so methods for detecting and correcting them are needed. In this paper, we examine the output of a compound noun decomposer, identify common characteristics of the major decomposition errors, and consider a method for detecting them.
