• Title/Summary/Keyword: 문서 자동분류 (automatic document classification)

Search Results: 311, Processing Time: 0.031 seconds

Re-defining Named Entity Type for Personal Information De-identification and A Generation method of Training Data (개인정보 비식별화를 위한 개체명 유형 재정의와 학습데이터 생성 방법)

  • Choi, Jae-hoon;Cho, Sang-hyun;Kim, Min-ho;Kwon, Hyuk-chul
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.206-208 / 2022
  • As the big data industry has recently developed significantly, interest in privacy violations caused by personal information leakage has increased. There have been attempts to automate de-identification through named entity recognition in natural language processing. In this paper, named entity recognition data are constructed semi-automatically by identifying sentences that contain de-identification targets in Korean Wikipedia. Compared to using general named entity recognition data, this reduces the cost of learning about information that is not subject to de-identification. In addition, it has the advantage of minimizing the additional rule-based and statistical systems needed to classify de-identification information in the output. The named entity recognition data proposed in this paper are classified into twelve categories, which include de-identification targets such as medical records and family relationships. In an experiment using the generated dataset, KoELECTRA achieved a performance of 0.87796 and RoBERTa 0.88.
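As a rough illustration of the kind of pipeline the abstract describes, the sketch below fine-tunes a Korean encoder for token classification over de-identification entity types with Hugging Face Transformers. The model checkpoint, the partial label set, the JSON file names, and the hyperparameters are assumptions for illustration, not the authors' actual configuration.

```python
# Minimal sketch: fine-tune a Korean encoder for de-identification NER.
# Label set is partial and illustrative; data files are hypothetical.
from transformers import (AutoTokenizer, AutoModelForTokenClassification,
                          DataCollatorForTokenClassification,
                          TrainingArguments, Trainer)
from datasets import load_dataset

# Hypothetical de-identification labels in IOB form (remaining categories omitted).
LABELS = ["O", "B-PERSON", "I-PERSON", "B-MEDICAL", "I-MEDICAL",
          "B-FAMILY", "I-FAMILY"]

CHECKPOINT = "monologg/koelectra-base-v3-discriminator"  # example Korean encoder
tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForTokenClassification.from_pretrained(CHECKPOINT,
                                                        num_labels=len(LABELS))

# Assumed data format: {"tokens": [...], "tags": [label ids per token]}
dataset = load_dataset("json", data_files={"train": "deid_train.json",
                                           "test": "deid_test.json"})

def tokenize_and_align(example):
    # Align word-level tags to sub-word tokens; -100 masks special tokens from the loss.
    enc = tokenizer(example["tokens"], is_split_into_words=True,
                    truncation=True, max_length=256)
    enc["labels"] = [-100 if wid is None else example["tags"][wid]
                     for wid in enc.word_ids()]
    return enc

tokenized = dataset.map(tokenize_and_align)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="deid-ner", num_train_epochs=3,
                           per_device_train_batch_size=32),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    data_collator=DataCollatorForTokenClassification(tokenizer),
)
trainer.train()
```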


A Document Collection Method for More Accurate Search Engine (정확도 높은 검색 엔진을 위한 문서 수집 방법)

  • Ha, Eun-Yong;Gwon, Hui-Yong;Hwang, Ho-Yeong
    • The KIPS Transactions: Part A / v.10A no.5 / pp.469-478 / 2003
  • Internet search engines using web robots visit servers connected to the Internet periodically or non-periodically. They extract and classify the collected data according to their own methods and construct the databases that are the basis of web information search. These procedures are repeated very frequently on the Web. Many search engine sites operate this processing strategically to become popular internet portal sites that provide users with ways to find information on the web. A web search engine contacts many thousands of web servers, maintains its existing databases, and navigates to collect data about newly connected web servers. However, these jobs are decided and conducted solely by the search engines, which run web robots to collect data from web servers without any knowledge of the servers' states. Each search engine issues a large number of requests and receives responses from web servers, which is one cause of increased internet traffic. If each web server notified web robots of a summary of its public documents, and each web robot then collected only the corresponding documents using this summary, unnecessary internet traffic would be eliminated, the accuracy of the data held by search engines would become higher, and the processing overhead of web-related jobs on both web servers and search engines would become lower. In this paper, a monitoring system on the web server is designed and implemented; it monitors the states of documents on the web server, summarizes the changes of modified documents, and sends the summary information to web robots that want to retrieve documents from the server. An efficient web robot for the web search engine is also designed and implemented; it uses the notified summary, retrieves the corresponding documents from the web servers, extracts index terms, and updates its databases.
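The core idea, a server-published change summary that lets a robot re-fetch only modified documents, can be sketched roughly as below. The /doc-summary.json endpoint and its JSON fields are hypothetical; the paper's actual monitoring system and robot protocol may differ.

```python
# Minimal sketch: a web robot that fetches a server-side change summary and
# re-indexes only documents modified since its last visit.
import json
import urllib.request
from datetime import datetime, timezone

SERVER = "http://example.com"                       # hypothetical server
last_crawl = datetime(2024, 1, 1, tzinfo=timezone.utc)

# 1. Ask the server for its document summary instead of blindly re-crawling.
with urllib.request.urlopen(f"{SERVER}/doc-summary.json") as resp:
    # e.g. [{"url": "/a.html", "modified": "2024-03-02T10:00:00+00:00"}, ...]
    summary = json.load(resp)

# 2. Download only documents changed since the previous crawl.
for doc in summary:
    modified = datetime.fromisoformat(doc["modified"])
    if modified > last_crawl:
        with urllib.request.urlopen(f"{SERVER}{doc['url']}") as page:
            html = page.read().decode("utf-8", errors="replace")
        # ... extract index terms from `html` and update the search database ...
        print("re-indexed", doc["url"])
```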

Categorization of POIs Using Word and Context information (관심 지점 명칭의 단어와 문맥 정보를 활용한 관심 지점의 분류)

  • Choi, Su Jeong;Park, Seong-Bae
    • Journal of the Korean Institute of Intelligent Systems / v.24 no.5 / pp.470-476 / 2014
  • A point of interest (POI) is a specific point location such as a cafe, a gallery, a shop, or a park, and consists of a name, a category, a location, and so on. This information is necessary for location-based applications, and above all the category is basic information. However, category information should be gathered automatically, because gathering it manually is costly. In this paper, we propose a novel method to estimate the category of POIs automatically using inner words and local context. An inner word is a word contained in a POI's name; since names sometimes expose category information, they are used as inner-word information in estimating the category. Local context information means the words around a POI's name in a document that mentions the name; this context also carries information for estimating the category. The proposed method is evaluated on two data sets. According to the experimental results, the proposed model combining inner words and local context shows higher accuracy than models using either alone.
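A rough sketch of combining inner-word features (from the POI name) with local-context features (from surrounding words) is shown below; the toy POI examples, the TF-IDF representation, and the logistic-regression classifier are illustrative stand-ins, not the authors' exact model.

```python
# Minimal sketch: combine inner-word and local-context features for POI
# category classification. Training examples and labels are illustrative.
from scipy.sparse import hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# (POI name, words surrounding the name in a document, category)
train = [
    ("한라산 커피 하우스", "향이 좋은 원두 아메리카노 디저트", "cafe"),
    ("제주 현대 미술관",   "전시 작품 관람 큐레이터",          "gallery"),
    ("탑동 해변 공원",     "산책 잔디 나들이 피크닉",          "park"),
]
names, contexts, labels = zip(*train)

name_vec = TfidfVectorizer()   # inner-word features from the POI name itself
ctx_vec = TfidfVectorizer()    # local-context features from surrounding words
X = hstack([name_vec.fit_transform(names), ctx_vec.fit_transform(contexts)])

clf = LogisticRegression(max_iter=1000).fit(X, labels)

# Predict the category of an unseen POI from its name and nearby words.
x_new = hstack([name_vec.transform(["시청 앞 갤러리"]),
                ctx_vec.transform(["사진전 개관 작가와의 만남"])])
print(clf.predict(x_new))
```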

A Study on the Effect of the Document Summarization Technique on the Fake News Detection Model (문서 요약 기법이 가짜 뉴스 탐지 모형에 미치는 영향에 관한 연구)

  • Shim, Jae-Seung;Won, Ha-Ram;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.201-220 / 2019
  • Fake news has emerged as a significant issue over the last few years, igniting discussions and research on how to solve this problem. In particular, studies on automated fact-checking and fake news detection using artificial intelligence and text analysis techniques have drawn attention. Fake news detection research entails a form of document classification; thus, document classification techniques have been widely used in this type of research. However, document summarization techniques have been inconspicuous in this field. At the same time, automatic news summarization services have become popular, and a recent study found that using news summarized through abstractive summarization strengthened the predictive performance of fake news detection models. Therefore, the need to study the integration of document summarization technology in the domestic news data environment has become evident. In order to examine the effect of extractive summarization on the fake news detection model, we first summarized news articles through extractive summarization, then built a detection model based on the summarized news, and finally compared it with a full-text-based detection model. The study found that BPN (Back Propagation Neural Network) and SVM (Support Vector Machine) did not exhibit a large difference in performance, whereas for DT (Decision Tree) the full-text-based model performed somewhat better. In the case of LR (Logistic Regression), the summary-based model exhibited superior performance, although the difference from the full-text-based model was not statistically significant. Therefore, when summarization is applied, at least the core information of the fake news is preserved, and the LR-based model confirms the possibility of performance improvement. This study features an experimental application of extractive summarization in fake news detection research employing various machine-learning algorithms. Its limitations are the relatively small amount of data and the lack of comparison between various summarization techniques; an in-depth analysis applying various analytical techniques to a larger data volume would be helpful in the future.
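The experimental pipeline, summarize first and then classify, might look roughly like the sketch below. The top-k TF-IDF sentence scoring is a simple stand-in for the extractive summarizer, and the two toy articles are placeholders; the paper's own summarizer and data differ.

```python
# Minimal sketch: extractive summarization followed by a summary-based
# fake news classifier. Summarizer and toy data are illustrative.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def extractive_summary(text, k=3):
    """Keep the k sentences with the highest average TF-IDF term weight."""
    sents = [s.strip() for s in text.split(".") if s.strip()]
    if len(sents) <= k:
        return text
    tfidf = TfidfVectorizer().fit_transform(sents)
    scores = np.asarray(tfidf.mean(axis=1)).ravel()
    top = sorted(np.argsort(scores)[-k:])          # keep original sentence order
    return ". ".join(sents[i] for i in top)

articles = ["Placeholder real news article text.",   # toy stand-ins
            "Placeholder fake news article text."]
labels = [0, 1]                                      # 0 = real, 1 = fake

summaries = [extractive_summary(a) for a in articles]
X = TfidfVectorizer().fit_transform(summaries)       # summary-based features
model = LogisticRegression(max_iter=1000).fit(X, labels)
```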

A Method of Identifying Ownership of Personal Information exposed in Social Network Service (소셜 네트워크 서비스에 노출된 개인정보의 소유자 식별 방법)

  • Kim, Seok-Hyun;Cho, Jin-Man;Jin, Seung-Hun;Choi, Dae-Seon
    • Journal of the Korea Institute of Information Security & Cryptology / v.23 no.6 / pp.1103-1110 / 2013
  • This paper proposes a method of identifying the ownership of personal information in a social network service. In detail, the proposed method automatically decides whether any location information mentioned in a tweet indicates the publisher's residence area. Identifying the ownership of personal information is a necessary part of evaluating the risk of personal information exposed online. The proposed method uses a set of decision rules over 13 features that capture lexical and syntactic characteristics of the tweet sentences. In an experiment using real Twitter data, the proposed method shows better performance (F1-score: 0.876) than conventional document classification models such as naive Bayes using n-grams as the feature set.
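A toy version of a rule-based ownership decision over simple tweet features is sketched below; the particular cue words, the four features, and the single decision rule are invented for illustration and are not the paper's 13 features.

```python
# Minimal sketch: decide whether a location mentioned in a tweet refers to the
# writer's own residence, using a few hand-made lexical features and one rule.
FIRST_PERSON = ("나는", "내가", "우리", "저는")        # illustrative cue words
RESIDENCE_CUES = ("살아", "살고", "거주", "이사", "집")

def extract_features(tweet, location):
    return {
        "mentions_location": location in tweet,
        "first_person": any(w in tweet for w in FIRST_PERSON),
        "residence_cue": any(w in tweet for w in RESIDENCE_CUES),
        "is_retweet": tweet.startswith("RT "),
    }

def owns_location(tweet, location):
    f = extract_features(tweet, location)
    # Decision rule: a first-person residence statement about the location,
    # and not a retweet of someone else's post.
    return (f["mentions_location"] and f["first_person"]
            and f["residence_cue"] and not f["is_retweet"])

print(owns_location("나는 서울 마포구에 살고 있어요", "마포구"))   # True
print(owns_location("RT 마포구 맛집 추천해주세요", "마포구"))      # False
```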

Development of a Framework for Semi-automatic Building Test Collection Specialized in Evaluating Relation Extraction between Technical Terminologies (기술용어 간 관계추출의 성능평가를 위한 반자동 테스트 컬렉션 구축 프레임워크 개발)

  • Jeong, Chang-Hoo;Choi, Sung-Pil;Lee, Min-Ho;Choi, Yun-Soo
    • The Journal of the Korea Contents Association / v.10 no.2 / pp.481-489 / 2010
  • Due to the increasing attention on relation extraction systems, constructing test collections for assessing their performance has emerged as an important task. In this paper, we propose a semi-automatic framework capable of constructing test collections for relation extraction on a large scale. Based on this framework, we develop a test collection that can assess the performance of various approaches to extracting relations between technical terminologies in scientific literature. The framework minimizes the cost of constructing this kind of collection and reduces the intrinsic fluctuations that may come from differences in the characteristics of collection developers. Furthermore, balanced and objective collections can be constructed by controlling the selection process of seed documents and terminologies.
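The semi-automatic collection step, pairing seed terms that co-occur in sentences of seed documents into candidate relation instances for later human verification, can be sketched as below; the seed terms and the single seed document are illustrative.

```python
# Minimal sketch: build candidate relation instances from seed documents and
# a controlled seed terminology list, leaving the relation label to annotators.
from itertools import combinations

seed_terms = ["support vector machine", "kernel function", "feature selection"]
seed_documents = [
    "A support vector machine maps inputs with a kernel function. "
    "Feature selection is applied before training.",
]

candidates = []
for doc in seed_documents:
    for sentence in doc.split(". "):
        found = [t for t in seed_terms if t.lower() in sentence.lower()]
        # Every pair of co-occurring seed terms becomes a candidate instance.
        for t1, t2 in combinations(found, 2):
            candidates.append({"sentence": sentence.strip(),
                               "term1": t1, "term2": t2,
                               "relation": None})   # labeled later by annotators

for c in candidates:
    print(c)
```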

Concept Classification System of Jeju Oreum based on Web Search (웹 검색 기반으로 한 제주 오름의 콘셉트 분류 시스템)

  • Ahn, Jinhyun;Byun, So-Young;Woo, Seo-Jung;An, Ye-Ji;Kang, Jungwoon;Kim, Mincheol
    • Journal of Digital Convergence / v.19 no.8 / pp.235-240 / 2021
  • Currently, the number of visitors to Oreum is increasing and tourism trends are changing rapidly. The motivation for visiting an Oreum is also shifting from relaxation and pleasure to experiences: in line with this change, people choose an Oreum for purposes such as wedding or family photos, not just exercise. However, it is difficult to search for an Oreum that matches a tourist's motivation. To solve this problem, we propose a system that provides the association between an Oreum and a concept in real time, based on the number of results from web search engines. Users can select a desired date to check the associations for past or selected periods and concepts. Through this research, we can contribute to the development of tourism in Jeju by supporting visitors to Oreum, Jeju's natural heritage. In the future, concepts for visiting beaches or the sea, not just Jeju Oreum, can also be provided. In this work, web search results are collected and stored in a database, and the Oreum-concept search results are provided on a homepage to classify Oreum trends.
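A rough sketch of the association computation is shown below: for each Oreum-concept pair, a web search result count is normalized and stored with the date so past periods can be reviewed. The search_result_count function is a hypothetical stand-in for a real search engine API, and the Oreum names, concepts, and SQLite schema are illustrative.

```python
# Minimal sketch: store date-stamped Oreum-concept association scores derived
# from web search result counts. The search call is a hypothetical stand-in.
import datetime
import sqlite3

def search_result_count(query: str) -> int:
    """Hypothetical stand-in: replace with a real web search engine API call."""
    return len(query) * 100   # dummy value so the sketch runs end to end

oreums = ["새별오름", "용눈이오름"]
concepts = ["웨딩촬영", "가족사진", "운동"]

db = sqlite3.connect("oreum_concept.db")
db.execute("""CREATE TABLE IF NOT EXISTS association
              (date TEXT, oreum TEXT, concept TEXT, score REAL)""")

today = datetime.date.today().isoformat()
for oreum in oreums:
    counts = {c: search_result_count(f"{oreum} {c}") for c in concepts}
    total = sum(counts.values()) or 1
    for concept, n in counts.items():
        # Normalized association of this Oreum with the concept on this date.
        db.execute("INSERT INTO association VALUES (?, ?, ?, ?)",
                   (today, oreum, concept, n / total))
db.commit()
```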

Decision of the Korean Speech Act using Feature Selection Method (자질 선택 기법을 이용한 한국어 화행 결정)

  • 김경선;서정연
    • Journal of KIISE: Software and Applications / v.30 no.3_4 / pp.278-284 / 2003
  • A speech act is the speaker's intention indicated through an utterance. It is important for understanding natural language dialogues and generating responses. This paper proposes a two-stage method that increases the performance of Korean speech act decision. The first stage selects features from the part-of-speech results of the sentence and from the context formed by previous speech acts. We use the χ² statistic (CHI), which has shown high performance in text categorization, to select features. The second stage determines the speech act with the selected features and a neural network. The proposed method shows the possibility of automatic speech act decision using only POS results, achieves good performance by using the more informative features, and speeds up processing by decreasing the number of features. We tested the system on a Korean dialogue corpus transcribed from recordings in real settings, consisting of 10,285 utterances and 17 speech acts. We trained on 8,349 utterances and tested on 1,936 utterances, obtaining the correct speech act for 1,709 utterances (88.3%). This is about 8% higher accuracy than without feature selection.
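The two-stage idea, chi-square feature selection followed by a neural-network classifier, might be sketched as below with scikit-learn; the morpheme/POS-style toy utterances, the previous-act feature encoding, and the network size are assumptions for illustration.

```python
# Minimal sketch: chi-square feature selection (stage 1) + neural network (stage 2)
# for speech act classification. Toy utterances and labels are illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Features of an utterance: POS-tagged morphemes plus the previous speech act.
train_x = ["예약/NNG 하/XSV 고 싶/VX prev=none",
           "몇/MM 시/NNB 에 가능/NNG prev=request",
           "네/IC 가능/NNG 하/XSV prev=ask-ref"]
train_y = ["request", "ask-ref", "response"]

pipeline = make_pipeline(
    CountVectorizer(token_pattern=r"\S+"),   # keep morpheme/POS tokens intact
    SelectKBest(chi2, k=5),                  # stage 1: chi-square feature selection
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500),  # stage 2: neural net
)
pipeline.fit(train_x, train_y)
print(pipeline.predict(["언제/MAG 가능/NNG 하/XSV prev=request"]))
```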

Recommender System using Association Rule and Collaborative Filtering (연관 규칙과 협력적 여과 방식을 이용한 추천 시스템)

  • 이기현;고병진;조근식
    • Proceedings of the Korea Intelligent Information System Society Conference / 2002.05a / pp.265-272 / 2002
  • Existing Internet websites apply collaborative filtering, which provides personalized services for each user in order to maximize user satisfaction. Collaborative filtering predicts items that match a particular user's taste and recommends them, based on correlations with users who have similar preferences. However, collaborative filtering requires ratings on more than a certain number of items before it can make recommendations, and because it relies only on the subset of users with similar preferences, it tends to ignore the information of the remaining users. Yet useful information for recommendation is also hidden in the remaining users' data. To discover this hidden information, this paper uses association rules from data mining for recommendation, together with collaborative filtering. An association rule expresses, in the form of a rule, the association that exists between one group of items and another. Association rules generated in this way are widely used for individual purchase analysis, cross-marketing, catalog design, loss-leader analysis, product display, and customer segmentation by purchasing tendency; however, they have rarely been applied to recommender systems. In this paper, we show that association rules can be used effectively for recommendation by deriving associations between item groups. That is, we derive association rules between items based on the history information of all users and use them as a supplement to collaborative filtering, thereby improving the efficiency of the recommender system.
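A toy version of the hybrid idea, mining pairwise association rules from all users' histories and using them to supplement a collaborative-filtering candidate list, is sketched below; the support/confidence thresholds, the item histories, and the pre-computed CF candidates are illustrative.

```python
# Minimal sketch: pairwise association rules mined from all user histories,
# used to supplement a collaborative-filtering recommendation list.
from itertools import combinations
from collections import Counter

histories = {                       # user -> items seen/purchased (toy data)
    "u1": {"A", "B", "C"},
    "u2": {"A", "B"},
    "u3": {"B", "C", "D"},
    "u4": {"A", "C"},
}
MIN_SUPPORT, MIN_CONFIDENCE = 0.4, 0.6

# 1. Mine pairwise association rules X -> Y over the whole user population.
item_count, pair_count = Counter(), Counter()
for items in histories.values():
    item_count.update(items)
    pair_count.update(combinations(sorted(items), 2))

rules = {}
n_users = len(histories)
for (x, y), c in pair_count.items():
    for a, b in ((x, y), (y, x)):
        support, confidence = c / n_users, c / item_count[a]
        if support >= MIN_SUPPORT and confidence >= MIN_CONFIDENCE:
            rules.setdefault(a, set()).add(b)

# 2. Combine the rules with a (pre-computed) collaborative-filtering candidate list.
def recommend(user, cf_candidates):
    seen = histories[user]
    rule_candidates = {y for x in seen for y in rules.get(x, set())} - seen
    # Association rules supplement the CF list rather than replace it.
    return list(dict.fromkeys(cf_candidates + sorted(rule_candidates)))

print(recommend("u2", cf_candidates=["C"]))
```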


Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, the realization of flexible storage expansion functions for processing a massive amount of unstructured log data and executing a considerable number of functions to categorize and analyze the stored unstructured log data is difficult in existing computer environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for processing unstructured log data that are difficult to process using the existing computing infrastructure's analysis tools and management system. The proposed system uses the IaaS (Infrastructure as a Service) cloud environment to provide a flexible expansion of computing resources and includes the ability to flexibly expand resources such as storage space and memory under conditions such as extended storage or rapid increase in log data. Moreover, to overcome the processing limits of the existing analysis tool when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because the HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions for the system to continually operate after it recovers from a malfunction. Finally, by establishing a distributed database using the NoSQL-based Mongo DB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as the MySQL databases have complex schemas that are inappropriate for processing unstructured log data. Further, strict schemas like those of relational databases cannot expand nodes in the case wherein the stored data are distributed to various nodes when the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases may provide but can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with an appropriate structure for processing unstructured data. The data models of the NoSQL are usually classified as Key-Value, column-oriented, and document-oriented types. Of these, the representative document-oriented data model, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB is introduced to the proposed system because it makes it easy to process unstructured log data through a flexible schema structure, facilitates flexible node expansion when the amount of data is rapidly increasing, and provides an Auto-Sharding function that automatically expands storage. 
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies data according to the type of log data and distributes it to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis of the MongoDB module, Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require a real-time log data analysis are stored in the MySQL module and provided real-time by the log graph generator module. The aggregated log data per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are parallel-distributed and processed by the Hadoop-based analysis module. A comparative evaluation is carried out against a log data processing system that uses only MySQL for inserting log data and estimating query performance; this evaluation proves the proposed system's superiority. Moreover, an optimal chunk size is confirmed through the log data insert performance evaluation of MongoDB for various chunk sizes.
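The document-oriented storage side of such a system might be sketched as below with pymongo: schema-free log records are inserted into MongoDB and aggregated per hour and type, the kind of summary the log graph generator module would plot. The connection string, database and collection names, and field names are illustrative assumptions.

```python
# Minimal sketch: insert schema-free log records into MongoDB and aggregate
# log volume per hour and type. Names and fields are illustrative.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
logs = client["banklogs"]["transactions"]

# Log records need not share a schema; each document keeps only the fields it has.
logs.insert_many([
    {"ts": datetime(2024, 5, 1, 9, 0, tzinfo=timezone.utc), "type": "transfer",
     "branch": "Seoul", "amount": 150000},
    {"ts": datetime(2024, 5, 1, 9, 5, tzinfo=timezone.utc), "type": "login",
     "channel": "mobile", "result": "ok"},
])

# Aggregate log counts per hour and type, the kind of summary a log graph
# generator module could plot.
pipeline = [
    {"$group": {"_id": {"hour": {"$hour": "$ts"}, "type": "$type"},
                "count": {"$sum": 1}}},
    {"$sort": {"_id.hour": 1}},
]
for row in logs.aggregate(pipeline):
    print(row)
```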