• Title/Summary/Keyword: document classification


The Retained Documents Disposal Project and Reorganization of National Records Management System(1968~1979): Focused on Reorganization the Government Document Classification Scheme and Criteria for Retention Period (보존문서정리작업과 국가기록관리체계의 개편(1968~1979) - 공문서분류표와 보존연한책정기준의 개편을 중심으로 -)

  • Lee, Seung-Il;Lee, Sang-Hun
    • Journal of Korean Society of Archives and Records Management / v.8 no.1 / pp.65-96 / 2008
  • The Retained Documents Disposal Project, promoted in 1968 and 1975, was executed as part of administrative plans for the prompt relocation of the government in case of a national emergency. Its main task was to re-appraise archival documents, reduce their volume to a minimum, and move them to the rear. The project in effect repudiated the records management system that the Korean government had established in 1964, pursuing a physical reduction of documents for the convenience of governmental dispersal. After completing these projects, the Korean government reorganized its records management system in 1979 so as to systematically reduce the creation of archival documents.

Sentence Cohesion & Subject driving Keywords Extraction for Document Classification (문서 분류를 위한 문장 응집도와 주어 주도의 주제어 추출)

  • Ahn Heui-Kook;Roh Hi-Young
    • Proceedings of the Korean Information Science Society Conference / 2005.07b / pp.463-465 / 2005
  • Word frequency information, used as a feature to represent document content in document classification, is weak at representing the topical words of a document. That is, it cannot express for what purpose (meaning) a keyword was used in a sentence, nor whether the keyword was extracted from a sentence with strong cohesion to neighboring sentences. Classification based on this information alone therefore has limited accuracy. To address this document representation problem, this paper extracts the sentence-role (subject) information of candidate keywords as a feature and derives a subject-driven information weight through a weighting scheme. In addition, the co-occurrence frequencies of keywords within sentences are extracted as features and stored in a thesaurus that captures the degree of association between keywords across sentences, from which a cohesion measure is derived. By determining the topical words of a document from the combination of these two sources of information, the method aims to reduce the insertion of unnecessary keywords during topic-word extraction for document classification and to provide a selection criterion for co-occurring keywords. Experiments confirmed that even a keyword appearing only once is more likely to be selected as a topic word when it serves as the subject driving a sentence or has a high cohesion weight, and that more fine-grained keyword scoring becomes possible for document classification. The selected topic words are therefore expected to improve classification accuracy.

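The scoring idea described in the abstract above (term frequency, a boost for words that act as sentence subjects, and a cohesion weight derived from sentence-level co-occurrence) might be sketched roughly as follows. This is a minimal illustration, not the authors' implementation; the subject set, the weight values, and the tokenized `sentences` input format are assumptions.

```python
from collections import Counter, defaultdict
from itertools import combinations

def score_topic_words(sentences, subjects, subject_boost=2.0, cohesion_weight=0.5):
    """Score candidate topic words from tokenized sentences.

    sentences: list of token lists, one per sentence.
    subjects:  set of tokens identified as sentence subjects
               (a real system would obtain these from a parser).
    """
    tf = Counter(tok for sent in sentences for tok in sent)

    # Sentence-level co-occurrence acts as a tiny thesaurus of related terms.
    cooc = defaultdict(set)
    for sent in sentences:
        for a, b in combinations(set(sent), 2):
            cooc[a].add(b)
            cooc[b].add(a)

    scores = {}
    for tok, freq in tf.items():
        score = float(freq)
        if tok in subjects:                           # subject-driven boost
            score *= subject_boost
        score += cohesion_weight * len(cooc[tok])     # cohesion from co-occurrence
        scores[tok] = score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Even a word seen once can rank high if it is a subject with many co-occurring partners.
sents = [["model", "classifies", "documents"],
         ["documents", "contain", "keywords"],
         ["keywords", "drive", "classification"]]
print(score_topic_words(sents, subjects={"model", "documents", "keywords"})[:3])
```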

A Study on the Ship Design of a new ICLL for the 21st Century (21세기 국제만재흘수선협약에 따른 선박설계의 연구)

  • Park M.K.;Kwon Y.J.
    • Journal of Korean Port Research / v.7 no.1 / pp.89-114 / 1993
  • ICLL 66 is the most widely ratified instrument of the IMO and is, along with the International Convention for the Safety of Life at Sea (SOLAS), the primary document setting forth internationally agreed ship safety standards. ICLL 66 set freeboard requirements based on experience gained from the first Load Line Convention in 1930 and on contemporary developments in ship design. Reexamination of ICLL 66 is indicated by the proliferation of novel ship designs for which it lacks adequate regulations and by significant advancements in analytical seakeeping and deck wetness prediction techniques now available to the designer. In this paper, the Freeboard Advisory Group reviews these issues against the changing climate of the marine industry and maritime administrations, discusses the state of the art in analytical seakeeping programs, and outlines a series of recommendations for the establishment of a new international load line convention for the next century. The steps needed for an international program at IMO are discussed and a new convention is proposed.


Information Technology Application for Oral Document Analysis (구술문서 자료분석을 위한 정보검색기술의 응용)

  • Park, Soon-Cheol;Hahm, Han-Hee
    • Journal of Korea Society of Industrial Information Systems / v.13 no.2 / pp.47-55 / 2008
  • The purpose of this paper is to develop an analytical methodology for oral documents through the application of information retrieval technologies. The system consists of keyword search, content summarization, clustering, classification, and topic tracing of the contents. The integrated model of these five levels of retrieval technology can be used exhaustively in the analysis of oral documents, which were collected as the oral histories of five men and women in the North Jeolla area. Of the five methods, topic tracing is the most pioneering accomplishment both at home and abroad. Finally, this research will shed light on methodological and theoretical studies of oral history and culture.

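As a rough illustration of the keyword-search and clustering levels mentioned in the abstract above, the sketch below indexes transcript segments with TF-IDF and groups them with k-means. The segment texts, the query, and the cluster count are placeholders; the paper's full five-level pipeline (including topic tracing) is not reproduced here.

```python
# Minimal sketch: TF-IDF keyword search and clustering over oral-history transcript segments.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.cluster import KMeans

segments = [
    "childhood memories of the village market",        # placeholder transcript text
    "working at the textile factory in the 1970s",
    "family traditions during the harvest season",
    "moving to the city and finding factory work",
]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(segments)

# Keyword search: rank segments by similarity to the query.
query = vectorizer.transform(["factory work"])
ranking = cosine_similarity(query, X).ravel().argsort()[::-1]
print("best match:", segments[ranking[0]])

# Clustering: group thematically similar segments (cluster count is assumed).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster labels:", labels)
```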

Patent Document Classification by Using Hierarchical Attention Network (계층적 주의 네트워크를 활용한 특허 문서 분류)

  • Jang, Hyuncheol;Han, Donghee;Ryu, Teaseon;Jang, Hyungkuk;Lim, HeuiSeok
    • Proceedings of the Korea Information Processing Society Conference / 2018.05a / pp.369-372 / 2018
  • In recent knowledge management, securing intellectual property rights through patents is a factor with a strong impact on business operations. To secure patents successfully, one must first understand the evolving patent classification system and be able to categorize vast amounts of patent data quickly according to that system. This study proposes a method that trains a hierarchical attention network, a machine learning technique, on patent abstracts to classify patent documents. To verify the performance of the proposed hierarchical attention network, experiments were conducted with modified input data and different word embeddings. Through this, we aim to demonstrate the performance of the hierarchical attention network for patent document classification and how it can be put to practical use. The results provide knowledge management researchers, corporate managers, and practitioners with theoretical and practical guidance on patent classification techniques that can be applied in corporate knowledge management.

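A hierarchical attention network encodes each sentence from its words with an attention layer, then encodes the document from the resulting sentence vectors with a second attention layer. The PyTorch sketch below is a minimal illustration of that structure, not the authors' model; the dimensions, the GRU encoders, and the attention formulation are assumptions.

```python
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    """Soft attention over a sequence of hidden states."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.context = nn.Linear(dim, 1, bias=False)

    def forward(self, h):                                 # h: (batch, seq, dim)
        u = torch.tanh(self.proj(h))
        alpha = torch.softmax(self.context(u), dim=1)     # attention weights
        return (alpha * h).sum(dim=1)                     # (batch, dim)

class HAN(nn.Module):
    def __init__(self, vocab_size, num_classes, emb=100, hid=50):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb, padding_idx=0)
        self.word_gru = nn.GRU(emb, hid, bidirectional=True, batch_first=True)
        self.word_attn = AttentionPool(2 * hid)
        self.sent_gru = nn.GRU(2 * hid, hid, bidirectional=True, batch_first=True)
        self.sent_attn = AttentionPool(2 * hid)
        self.out = nn.Linear(2 * hid, num_classes)

    def forward(self, docs):                              # docs: (batch, n_sents, n_words) token ids
        b, s, w = docs.shape
        words = self.emb(docs.view(b * s, w))             # encode every sentence's words
        sent_vecs = self.word_attn(self.word_gru(words)[0]).view(b, s, -1)
        doc_vec = self.sent_attn(self.sent_gru(sent_vecs)[0])
        return self.out(doc_vec)                          # logits over patent classes

# Toy usage: 2 patent abstracts, 3 sentences of 4 tokens each, 5 classes.
model = HAN(vocab_size=1000, num_classes=5)
logits = model(torch.randint(1, 1000, (2, 3, 4)))
print(logits.shape)                                       # torch.Size([2, 5])
```
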
Cross-Domain Text Sentiment Classification Method Based on the CNN-BiLSTM-TE Model

  • Zeng, Yuyang;Zhang, Ruirui;Yang, Liang;Song, Sujuan
    • Journal of Information Processing Systems / v.17 no.4 / pp.818-833 / 2021
  • To address the problems of low precision, insufficient feature extraction, and poor contextual modeling in existing text sentiment analysis methods, a hybrid model based on CNN-BiLSTM-TE (convolutional neural network, bidirectional long short-term memory, and topic extraction) was proposed. First, Chinese text data was converted into vectors through transfer learning with Word2Vec. Second, local features were extracted by the CNN. Then, contextual information was extracted by the BiLSTM network and the emotional tendency was obtained using softmax. Finally, topics were extracted by term frequency-inverse document frequency and k-means. Compared with the CNN, BiLSTM, and gated recurrent unit (GRU) models, the CNN-BiLSTM-TE model's F1-score was higher by 0.0147, 0.006, and 0.0052, respectively; compared with the CNN-LSTM, LSTM-CNN, and BiLSTM-CNN models, it was higher by 0.0071, 0.0038, and 0.0049, respectively. Experimental results showed that the CNN-BiLSTM-TE model can effectively improve these indicators in application. Lastly, scalability was verified on a takeaway dataset, which gives the model great value in practical applications.

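The pipeline described above (embedding features, a CNN for local features, a BiLSTM for context, softmax for sentiment, plus TF-IDF and k-means for topic extraction) might be sketched roughly as below with PyTorch and scikit-learn. Layer sizes, sequence length, vocabulary size, and the toy review texts are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

class CNNBiLSTM(nn.Module):
    """Local features via Conv1d, context via BiLSTM, sentiment via softmax."""
    def __init__(self, vocab_size, emb=100, channels=64, hid=64, classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb, padding_idx=0)
        self.conv = nn.Conv1d(emb, channels, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(channels, hid, bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hid, classes)

    def forward(self, ids):                               # ids: (batch, seq_len)
        x = self.emb(ids).transpose(1, 2)                 # (batch, emb, seq) for Conv1d
        x = torch.relu(self.conv(x)).transpose(1, 2)      # back to (batch, seq, channels)
        h, _ = self.lstm(x)
        return torch.softmax(self.out(h[:, -1]), dim=-1)  # sentiment probabilities

model = CNNBiLSTM(vocab_size=5000)
probs = model(torch.randint(1, 5000, (4, 30)))            # 4 reviews, 30 tokens each
print(probs.shape)                                        # torch.Size([4, 2])

# Topic extraction (the "TE" part): TF-IDF vectors clustered with k-means.
docs = ["delivery was fast and food was hot",             # placeholder review texts
        "cold food and very slow delivery",
        "great service, will order again"]
tfidf = TfidfVectorizer().fit_transform(docs)
topics = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(tfidf)
print(topics)
```
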
SMS Text Messages Filtering using Word Embedding and Deep Learning Techniques (워드 임베딩과 딥러닝 기법을 이용한 SMS 문자 메시지 필터링)

  • Lee, Hyun Young;Kang, Seung Shik
    • Smart Media Journal / v.7 no.4 / pp.24-29 / 2018
  • Text analysis techniques for natural language processing in deep learning represent words in vector form through word embedding. In this paper, we propose a method of constructing a document vector and classifying it as spam or a normal text message using word embedding and deep learning. Automatic word spacing applied in the preprocessing step ensures that words with similar contexts are represented adjacently in the vector space. In addition, intentional word-formation errors with non-alphabetic or unusual characters are designed to avoid being blocked by the spam message filter. Two embedding algorithms, CBOW and skip-gram, are used to produce the sentence vectors, and the performance and accuracy of the deep learning-based spam filter model are measured by comparison with SVM Light.

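A rough sketch of the document-vector construction described above: train Word2Vec with either CBOW or skip-gram (gensim's `sg` flag), average the word vectors of each message into a sentence vector, and feed those vectors to a classifier. The tiny corpus, the averaging scheme, and the logistic-regression classifier are illustrative assumptions (the paper compares against SVM Light and a deep learning model).

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression

messages = [["free", "prize", "claim", "now"],            # placeholder tokenized SMS
            ["win", "cash", "click", "link"],
            ["see", "you", "at", "lunch"],
            ["meeting", "moved", "to", "three"]]
labels = [1, 1, 0, 0]                                     # 1 = spam, 0 = normal

# sg=0 selects CBOW, sg=1 selects skip-gram.
w2v = Word2Vec(messages, vector_size=50, window=3, min_count=1, sg=1, epochs=50)

def message_vector(tokens):
    """Average the word vectors of a message into one document vector."""
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

X = np.stack([message_vector(m) for m in messages])
clf = LogisticRegression().fit(X, labels)
print(clf.predict([message_vector(["claim", "free", "cash"])]))
```
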
Research on Subjective-type Grading System Using Syntactic-Semantic Tree Comparator (구문의미트리 비교기를 이용한 주관식 문항 채점 시스템에 대한 연구)

  • Kang, WonSeog
    • The Journal of Korean Association of Computer Education / v.21 no.6 / pp.83-92 / 2018
  • Subjective questions are appropriate for evaluating deep thinking, but they are not easy to score. Since graders can produce different scores even under the same scoring criteria, an objective automatic evaluation system is needed; however, such a system faces the problem of analyzing and comparing Korean text. This paper proposes a Korean syntactic analysis and subjective-question grading system using a syntactic-semantic tree comparator. The system is a hybrid of word-based and syntactic-semantic-tree-based grading: it grades answers to subjective questions with the syntactic-semantic comparator and shows good results. The system can be utilized in Korean syntactic-semantic analysis, subjective-question grading, and document classification.

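The core idea of grading by comparing syntactic-semantic trees can be illustrated with a toy recursive similarity measure over labeled trees, as in the sketch below. The tree representation, the greedy matching rule, and the blending with word overlap are assumptions made for illustration; they are not the paper's comparator.

```python
def tree_similarity(a, b):
    """Toy similarity between two labeled trees given as (label, [children])."""
    label_a, kids_a = a
    label_b, kids_b = b
    score = 1.0 if label_a == label_b else 0.0
    if kids_a and kids_b:
        # Greedily match each child of `a` to its best-scoring child of `b`.
        child = sum(max(tree_similarity(ca, cb) for cb in kids_b) for ca in kids_a)
        score += child / max(len(kids_a), len(kids_b))
    return score / 2 if (kids_a or kids_b) else score

def hybrid_grade(answer_tree, key_tree, answer_words, key_words, alpha=0.5):
    """Blend word-overlap grading with tree-based grading (hybrid scheme)."""
    overlap = len(set(answer_words) & set(key_words)) / max(len(set(key_words)), 1)
    return alpha * overlap + (1 - alpha) * tree_similarity(answer_tree, key_tree)

# Toy example: subject-verb-object trees for a model answer and a student answer.
key = ("predicate:classify", [("subject:system", []), ("object:document", [])])
ans = ("predicate:classify", [("subject:system", []), ("object:text", [])])
print(hybrid_grade(ans, key, ["system", "classify", "text"],
                   ["system", "classify", "document"]))
```
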
Design and Implementation of Web Crawler utilizing Unstructured data

  • Tanvir, Ahmed Md.;Chung, Mokdong
    • Journal of Korea Multimedia Society / v.22 no.3 / pp.374-385 / 2019
  • A web crawler is a program commonly used by search engines to discover newly created content on the internet. The use of crawlers has made the web easier for users. In this paper, we structure unstructured data in order to collect data from web pages. Our system is able to choose the words near a given keyword across more than one document in an unstructured way; neighboring data are collected around the keyword through word2vec. The system's goal is filtering at the data-acquisition level for a large taxonomy. The main problem in text taxonomy is how to improve classification accuracy, and to improve it we propose a new weighting method for TF-IDF, modifying the TF algorithm to calculate accuracy on unstructured data. Finally, our system proposes a web-page search crawling algorithm derived from TF-IDF and the RL web search algorithm to enhance the efficiency of retrieving relevant information. An attempt has also been made to examine the working of crawlers and crawling algorithms in search engines for efficient information retrieval.

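The abstract above does not spell out the exact modification of TF-IDF, so the sketch below only shows the standard TF-IDF weighting such a scheme would start from, in plain Python. The smoothing constants and the toy pages are assumptions.

```python
import math
from collections import Counter

def tf_idf(pages):
    """Standard TF-IDF weights for a list of tokenized pages (the baseline
    that a modified TF weighting scheme would start from)."""
    n = len(pages)
    df = Counter(tok for page in pages for tok in set(page))   # document frequency
    weights = []
    for page in pages:
        tf = Counter(page)
        total = len(page)
        weights.append({tok: (cnt / total) * math.log((1 + n) / (1 + df[tok]) + 1)
                        for tok, cnt in tf.items()})
    return weights

pages = [["crawler", "visits", "web", "pages"],
         ["search", "engine", "ranks", "pages"],
         ["crawler", "feeds", "the", "search", "engine"]]
for w in tf_idf(pages):
    print(max(w, key=w.get), round(max(w.values()), 3))        # top-weighted term per page
```
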
An Efficient Machine Learning-based Text Summarization in the Malayalam Language

  • P Haroon, Rosna;Gafur M, Abdul;Nisha U, Barakkath
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.6 / pp.1778-1799 / 2022
  • Automatic text summarization is a procedure that condenses a large amount of content into a shorter text that retains the significant information. Malayalam is one of the most difficult languages used in certain areas of India, most commonly in Kerala and in Lakshadweep. Natural language processing for Malayalam is relatively underdeveloped due to the complexity of the language as well as the scarcity of available resources. In this paper, an approach to summarizing Malayalam documents is proposed by training a model based on the Support Vector Machine classification algorithm. Different features of the text are taken into account in training so that the system can output the most important information from the input text. The classifier assigns the most important, important, average, and least significant sentences to separate classes, and on this basis the machine creates a summary of the input document. The user can select a compression ratio so that the system outputs that fraction of the text as the summary. Model performance is measured using different genres of Malayalam documents as well as documents from the same domain. The model is evaluated with the content-evaluation measures precision, recall, F-score, and relative utility. The obtained precision and recall values show that the model is reliable and more relevant than the other summarizers.
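
A minimal sketch of the approach described above: represent each sentence by simple features, train an SVM classifier on importance labels, and keep the top-scoring fraction of sentences given a compression ratio. The feature set, the toy labels, and scikit-learn's `SVC` are illustrative assumptions, not the paper's Malayalam-specific pipeline.

```python
import numpy as np
from sklearn.svm import SVC

def features(sentences):
    """Simple per-sentence features: relative position, length, title-word overlap."""
    title = set(sentences[0].split())
    feats = []
    for i, s in enumerate(sentences):
        words = s.split()
        feats.append([i / len(sentences),                 # relative position
                      len(words),                         # sentence length
                      len(set(words) & title)])           # overlap with title words
    return np.array(feats, dtype=float)

# Toy training data: sentences labeled by importance class (3 = most important ... 0 = least).
train_sents = ["Document summarization in Malayalam",
               "The classifier ranks sentences by importance",
               "Unimportant filler sentence here",
               "The summary keeps the top fraction of sentences"]
train_labels = [3, 3, 0, 2]
clf = SVC(decision_function_shape="ovr").fit(features(train_sents), train_labels)

def summarize(sentences, compression=0.5):
    """Keep the highest-ranked `compression` fraction of sentences, in original order."""
    scores = clf.predict(features(sentences))
    keep = max(1, int(round(len(sentences) * compression)))
    idx = np.argsort(scores)[::-1][:keep]
    return [sentences[i] for i in sorted(idx)]

print(summarize(train_sents, compression=0.5))
```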