• Title/Summary/Keyword: Document Classifier (문서 분류기)

Search Results: 191

Selecting Multiple Query Examples for Active Learning (능동적 학습을 위한 복수 문의예제 선정)

  • 강재호;류광렬
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2004.04b
    • /
    • pp.541-543
    • /
    • 2004
  • Active learning builds as accurate a classifier as possible under limited time and labeling effort by repeatedly selecting examples to add to the training set, i.e., query examples, and retraining on the expanded training set. The core of active learning lies in selecting the query examples for which the user is asked to assign a category. Various methods have been proposed for selecting effective query examples, but they are designed to work best when a single query example is selected at each query step. If active learning could present multiple examples to the user at once, the user could compare the query examples against one another and therefore assign categories faster and more accurately. Moreover, when sufficient labeling personnel are available, the category assignment work can be parallelized, which greatly shortens the overall learning time. However, because similar examples receive similar suitability scores, directly applying the existing methods to select multiple query examples can cause similar examples to be chosen together, degrading the efficiency of active learning. This paper proposes a method that overcomes this problem: once an example is selected as a query example, any example whose similarity to it exceeds a given threshold is excluded from the same query batch. Experiments on document classification show that the proposed method substantially alleviates the problems that arise when existing selection methods are used to pick multiple query examples, and that selecting multiple query examples causes no significant performance loss compared with selecting one example per query step. (A sketch of the selection rule appears after this entry.)

  • PDF
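  • A minimal Python sketch of the batch selection rule described in the abstract above, assuming a feature matrix, per-example uncertainty scores, and a cosine-similarity threshold are given (all names here are illustrative, not from the paper):

      import numpy as np
      from sklearn.metrics.pairwise import cosine_similarity

      def select_query_batch(X_pool, uncertainty, batch_size, sim_threshold=0.8):
          # Greedily take the most uncertain examples, skipping any candidate
          # whose similarity to an already selected example exceeds the threshold.
          sims = cosine_similarity(X_pool)
          selected = []
          for idx in np.argsort(-uncertainty):
              if len(selected) == batch_size:
                  break
              if all(sims[idx, j] < sim_threshold for j in selected):
                  selected.append(idx)
          return selected

      # Toy usage with random data in place of document vectors.
      X = np.random.rand(100, 20)
      u = np.random.rand(100)
      print(select_query_batch(X, u, batch_size=5))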

Recognition of Various Printed Hangul Images by using the Boundary Tracing Technique (경계선 기울기 방법을 이용한 다양한 인쇄체 한글의 인식)

  • Baek, Seung-Bok;Kang, Soon-Dae;Sohn, Young-Sun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.13 no.1
    • /
    • pp.1-5
    • /
    • 2003
  • In this paper, we realize a system that converts images of printed Korean (Hangul) characters, captured with a black-and-white CCD camera, into editable text documents. Using the boundary tracing technique, which is robust to noise in character recognition, we extract contour information that reflects the structural features of each character. From the contour information, we recognize the horizontal and vertical vowels of the character image and classify the character into one of six patterns. The character is then segmented into consonant and vowel units. The vowels are recognized using maximum-length projection, and the separated consonants are recognized by comparing the input pattern with standard patterns that encode the phase information of boundary-line changes. The recognized characters are output to a word editor as editable KS completion-type Hangul codes.
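  • The boundary-tracing step above extracts character contours from a binarized image; a rough present-day analogue using OpenCV contour extraction (an illustrative assumption, not the authors' implementation; OpenCV 4.x API) might look like:

      import cv2

      def extract_character_contours(image_path):
          # Binarize the scanned character image (Otsu threshold, dark strokes become white).
          img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
          _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
          # Trace the outer boundary of each connected stroke component.
          contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
          return contours

    Each returned contour is a sequence of boundary pixel coordinates from which slope and direction features can be derived.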

A Study on the Journalists in Busan during the Japanese Colonial Period (일제기 부산 지역 언론인 연구)

  • Chae, Baek
    • Korean journal of communication and information
    • /
    • v.56
    • /
    • pp.132-155
    • /
    • 2011
  • The aim of this study is to examine Korean journalists in Busan during the Japanese colonial period. For this purpose, the study analyzes the managers of the Busan branches of the Dong-A Daily News and the Chosun Daily News. Their personal histories and ideological backgrounds show that the majority had careers in the socialist or nationalist movements. In the case of the Dong-A Daily News, at least five of the nine managers came from the socialist movement, and An Heeje and Kim Jongbeom of the Dong-A Daily News were nationwide figures in the nationalist and socialist movements. The ideological backgrounds of the Dong-A Daily News managers were more progressive than those of the Chosun Daily News. This difference between the two newspapers seems to result from their differing character and social reputation. The activists of that time viewed newspapers as the most effective instrument for reaching the masses, and the executives of the two newspaper companies in turn viewed these activists as advantageous for promoting newspaper sales.

  • PDF

Privacy-Preserving Language Model Fine-Tuning Using Offsite Tuning (프라이버시 보호를 위한 오프사이트 튜닝 기반 언어모델 미세 조정 방법론)

  • Jinmyung Jeong;Namgyu Kim
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.4
    • /
    • pp.165-184
    • /
    • 2023
  • Recently, deep learning analysis of unstructured text data using language models such as Google's BERT and OpenAI's GPT has shown remarkable results in various applications. Most language models learn generalized linguistic information from pre-training data and then update their weights for downstream tasks through a fine-tuning process. However, concerns have been raised that privacy may be violated when using these language models: data privacy may be violated when the data owner provides large amounts of data to the model owner to fine-tune the language model, and conversely, when the model owner discloses the entire model to the data owner, the model's structure and weights are exposed, which may violate the privacy of the model. The concept of offsite tuning was recently proposed to fine-tune language models while protecting privacy in such situations, but that work does not provide a concrete way to apply the methodology to text classification models. In this study, we propose a concrete method for applying offsite tuning with an additional classifier to protect the privacy of both the model and the data when performing multi-class classification fine-tuning on Korean documents. To evaluate the proposed methodology, we conducted experiments on about 200,000 Korean documents from five major fields (ICT, electrical, electronic, mechanical, and medical) provided by AIHub, and found that the proposed plug-in model outperforms both the zero-shot model and the offsite model in terms of classification accuracy.
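  • A minimal PyTorch sketch of the plug-in idea described above, assuming the data owner receives a frozen emulator of the model owner's middle layers and trains only small adapters plus a new classification head (module names, dimensions, and the mean-pooling step are illustrative assumptions, not the paper's exact architecture):

      import torch
      import torch.nn as nn

      class OffsiteTunedClassifier(nn.Module):
          def __init__(self, emulator: nn.Module, hidden_dim: int, num_classes: int):
              super().__init__()
              self.bottom_adapter = nn.Linear(hidden_dim, hidden_dim)   # trainable, data-owner side
              self.emulator = emulator                                  # frozen stand-in for the middle layers
              for p in self.emulator.parameters():
                  p.requires_grad_(False)
              self.top_adapter = nn.Linear(hidden_dim, hidden_dim)      # trainable, data-owner side
              self.classifier = nn.Linear(hidden_dim, num_classes)      # additional classification head

          def forward(self, hidden_states):                             # (batch, seq_len, hidden_dim)
              h = self.top_adapter(self.emulator(self.bottom_adapter(hidden_states)))
              return self.classifier(h.mean(dim=1))                     # pool over tokens, then classify

      # Toy usage: a small MLP stands in for the compressed emulator.
      emulator = nn.Sequential(nn.Linear(768, 768), nn.ReLU(), nn.Linear(768, 768))
      model = OffsiteTunedClassifier(emulator, hidden_dim=768, num_classes=5)
      logits = model(torch.randn(2, 128, 768))   # five document classes, as in the experiments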

An EXPRESS-to-XML Translator (EXPRESS 데이타를 XML 문서로 변환하는 번역기)

  • 이기호;김혜진
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.8 no.6
    • /
    • pp.746-755
    • /
    • 2002
  • EXPRESS is a product information description language that is interpretable by both humans and software. Product data written in EXPRESS can be exchanged between heterogeneous systems. However, few software tools can handle EXPRESS, and those that exist are expensive. XML makes it possible to update and manage data on the Web, and because the Web is comparatively easy to use and access, data represented in XML do not depend on specific applications or systems and can be used for data exchange. Therefore, representing EXPRESS-driven data in XML would make data exchange wider and easier. In this work, a method of translating EXPRESS documents into XML DTD and XML Schema is proposed. By classifying all of the EXPRESS syntax elements and considering the complex cases they give rise to, translation rules that produce XML DTD and XML Schema representations are suggested, and a translator corresponding to these rules is implemented.
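  • A toy Python illustration of one such translation rule, mapping a simplified EXPRESS ENTITY declaration to an XML Schema complexType (the regular-expression parsing and the type mapping are illustrative assumptions; real EXPRESS requires a full parser):

      import re

      # Toy mapping from simplified EXPRESS base types to XML Schema types.
      TYPE_MAP = {"STRING": "xs:string", "INTEGER": "xs:integer", "REAL": "xs:double"}

      def entity_to_xsd(express_text):
          # Pick out the entity name and its "attribute : TYPE;" declarations.
          name = re.search(r"ENTITY\s+(\w+);", express_text).group(1)
          attrs = re.findall(r"(\w+)\s*:\s*(\w+);", express_text)
          lines = [f'<xs:complexType name="{name}">', "  <xs:sequence>"]
          for attr, typ in attrs:
              lines.append(f'    <xs:element name="{attr}" type="{TYPE_MAP.get(typ, typ)}"/>')
          lines += ["  </xs:sequence>", "</xs:complexType>"]
          return "\n".join(lines)

      example = "ENTITY point; x : REAL; y : REAL; END_ENTITY;"
      print(entity_to_xsd(example))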

Contemporary Piracy in Southeast Asia and Somalia An Analysis of Causes, Effects, and Current Counter-Piracy Approaches (동남아시아와 소말리아의 해적 문제에 관한 연구 기원, 영향과 현재의 대해적 대응방안 고찰)

  • Chun, Kwang Ho
    • The Southeast Asian review
    • /
    • v.21 no.2
    • /
    • pp.293-327
    • /
    • 2011
  • The Somali piracy problem has reached an unprecedented level. By 2010 alone, more than 445 vessels had been attacked by pirates and about 1,181 people had been taken hostage for ransom. Somalia, however, is not the only place where piracy has become an issue; over the past twenty years, piracy in Southeast Asia has also been a major concern. This paper examines two case studies in order to analyze the causes, effects, and typology of piracy. Although each case has its own characteristics, analysis of information drawn from academic, legal, and official documents as well as from current newspaper and internet reporting leads to the conclusion that the causes of piracy are found mostly on land. The paper also shows that states pursue the suppression of piracy with differing economic, security, and geographic interests. Moreover, current counter-piracy approaches concentrate on maritime law enforcement at sea rather than holistically addressing the land-based causes; the paper points out that, rather than being a path toward eradicating piracy, such approaches inevitably leave the underlying problems in place.

Construction of Onion Sentiment Dictionary using Cluster Analysis (군집분석을 이용한 양파 감성사전 구축)

  • Oh, Seungwon;Kim, Min Soo
    • Journal of the Korean Data Analysis Society
    • /
    • v.20 no.6
    • /
    • pp.2917-2932
    • /
    • 2018
  • Much research has gone into developing production prediction models to resolve the supply imbalance of onions, a vegetable closely tied to Korean food. However, given that onions can be stored, predicting production alone cannot resolve the supply imbalance. This paper therefore aims to build a sentiment dictionary for predicting onion prices from internet articles, which are easy to access in daily life and contain information about onion production and the various factors behind prices. Articles about onions from 2012 to 2016 were collected, and documents were classified by onion wholesale price movements using four variants of TF-IDF weighting for comparison. Positive and negative price-related words were clustered with k-means, DBSCAN (density-based spatial clustering of applications with noise), and GMM (Gaussian mixture model) clustering, all partitional clustering methods; GMM clustering yielded three meaningful dictionaries. To assess the validity of the resulting dictionaries, articles classified by price rise or drop were fed into a logistic regression, which achieved 85.7% accuracy.
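  • A small scikit-learn sketch of the pipeline described above: TF-IDF weighting, GMM clustering of term vectors to group candidate sentiment words, and a logistic-regression check against price-movement labels (the texts, labels, and cluster count are placeholders, not the paper's data):

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.mixture import GaussianMixture
      from sklearn.linear_model import LogisticRegression

      articles = ["onion price rises on short supply and poor harvest",
                  "onion price falls after bumper harvest and high stocks"]
      labels = [1, 0]                      # 1 = price rise, 0 = price drop

      vectorizer = TfidfVectorizer()
      X = vectorizer.fit_transform(articles)

      # Cluster terms by their TF-IDF profile across documents (terms as rows).
      term_vectors = X.T.toarray()
      gmm = GaussianMixture(n_components=2, random_state=0).fit(term_vectors)
      term_cluster = dict(zip(vectorizer.get_feature_names_out(), gmm.predict(term_vectors)))

      # Sanity-check the grouping by classifying article-level TF-IDF vectors.
      clf = LogisticRegression().fit(X, labels)
      print(term_cluster)
      print("training accuracy:", clf.score(X, labels))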

Design and Implementation of Web-based Problem Management System for CT Radiological Technologist Education (CT 전문방사선사 교육을 위한 웹기반 문항관리 시스템의 설계 및 구현)

  • Shin Yong-Won;Koo Bong-Oh;Shim Choon-Bo
    • The Journal of the Korea Contents Association
    • /
    • v.5 no.1
    • /
    • pp.27-35
    • /
    • 2005
  • Despite the recent rapid progress of information technology in the medical and health fields, the development and management of problem sets covering medical and educational content for radiological technologists has still been handled manually and offline using document editors. In this study, a web-based problem management system is designed and implemented. The system can efficiently manage and present various kinds of problem sets for integrated education and professional licensing, without time and space limitations, in order to improve the efficiency of supplementary training and help candidates obtain the professional CT radiological technologist license. The proposed system consists of an administration module and a user module. The former supports functions such as problem creation, problem categorization, user management, and adjustment of difficulty levels; the latter supports exam registration, problem retrieval, personal score retrieval, and viewing of answer explanations. The system also provides problem explanations and score analysis after an examination, and is expected to be a useful, practical tool that improves learning and information exchange among those preparing for the CT professional radiological technologist licensing examination.

  • PDF

The Research Trend Analysis of the Korean Journal of Physical Education using Mecab-ko Morphology Analyzer (Mecab-ko 형태소 분석을 이용한 한국체육학회지 연구동향 분석)

  • Park, Sung-Geon;Kim, Wanseop;Lee, Dae-Taek
    • 한국체육학회지인문사회과학편
    • /
    • v.56 no.6
    • /
    • pp.595-605
    • /
    • 2017
  • The purpose of this study is to investigate which research fields are preferred by researchers of the Korean Physical Education Society, using Mecab-ko morphological analysis, and whether the interests of researchers differ between the humanities and social sciences and the natural sciences. The data collected for this study are 5,014 papers published online in the Korean Journal of Physical Education from March 2002 to March 2017. We used the Mecab-ko morphological analyzer to extract keywords from the collected documents. The study found that the number of papers published in KAHPERD appears to be decreasing, and that researchers' main concerns leaned more toward leisure, sport for all, and health than toward performance improvement, with women, the middle-aged, and the elderly as the research subjects of greatest interest. Researchers in the humanities and social sciences showed interest both in traditional research topics and in topics of social interest, while researchers in the natural sciences showed interest in deeper study of traditional topics. In conclusion, to revitalize sports convergence research, it is necessary to establish standards for research fields that address both the depth and the breadth of research.
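  • A minimal Python sketch of the keyword-extraction step, assuming the mecab-ko dictionary and the konlpy wrapper are installed (the sample titles are placeholders, not the journal data):

      from collections import Counter
      from konlpy.tag import Mecab   # requires mecab-ko and mecab-ko-dic to be installed

      titles = ["여가 스포츠 참여가 중년 여성의 건강에 미치는 영향",
                "엘리트 선수의 경기력 향상을 위한 훈련 프로그램 분석"]

      mecab = Mecab()
      noun_counts = Counter()
      for title in titles:
          noun_counts.update(mecab.nouns(title))   # nouns serve as candidate keywords

      print(noun_counts.most_common(10))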

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created when operating computer systems, are used in many processes, from computer system inspection and process optimization to customized optimization for users. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data generated by banks. Most of the log data generated during banking operations come from handling clients' business. Therefore, a separate log data processing system needs to be established in order to gather, store, categorize, and analyze the log data generated while processing client business. However, in existing computing environments it is difficult to realize flexible storage expansion for a massive amount of unstructured log data and to execute the considerable number of functions needed to categorize and analyze the stored data. Thus, in this study we use cloud computing technology to build a cloud-based log data processing system for unstructured log data that are difficult to handle with the analysis tools and management systems of the existing computing infrastructure. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources such as storage space and memory under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because the HDFS (Hadoop Distributed File System) stores data by replicating the block units of the aggregated log data, the proposed system offers automatic restore functions so that the system can continue to operate after recovering from a malfunction. Finally, by establishing a distributed database using NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data, and their strict schemas make it difficult to expand nodes when the stored data must be distributed across nodes as the data volume rapidly increases. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand through node dispersion when data grow rapidly; it is a non-relational database with a structure appropriate for unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented types. Of these, the representative document-oriented data model, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB is adopted because its flexible schema makes it easy to process unstructured log data, it facilitates flexible node expansion when the amount of data grows rapidly, and it provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis performed by the MongoDB module, the Hadoop-based analysis module, and the MySQL module, per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted as graphs according to the user's various analysis conditions, and the aggregated log data in the MongoDB module are parallel-distributed and processed by the Hadoop-based analysis module. A comparative evaluation of log data insertion and query performance against a system that uses only MySQL demonstrates the superiority of the proposed system. Moreover, an optimal chunk size is identified through a log data insert performance evaluation of MongoDB for various chunk sizes.
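  • A minimal pymongo sketch of the log-collector routing described above: schema-free log documents go straight into a MongoDB collection, keyed by log type for later analysis (the connection string, database, collection, and field names are placeholders):

      from datetime import datetime, timezone
      from pymongo import MongoClient

      client = MongoClient("mongodb://localhost:27017")
      logs = client["bank_logs"]["client_business_logs"]

      def collect_log(raw_line, log_type):
          # MongoDB's free schema lets each log type carry a different payload shape.
          logs.insert_one({
              "type": log_type,                          # used to route and shard later analysis
              "raw": raw_line,
              "collected_at": datetime.now(timezone.utc),
          })

      collect_log("2013-06-01 09:12:00 ATM withdrawal branch=021 amount=50000", "atm")
      print(logs.count_documents({"type": "atm"}))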