• Title/Summary/Keyword: Intelligent document processing


Unsupervised Document Clustering for Constructing User Profile of Web Agent (웹 에이전트 사용자 특성모델 구축을 위한 비감독 문서 분류)

  • 오재준;박영택
    • Journal of Intelligence and Information Systems / v.4 no.2 / pp.61-83 / 1998
  • This study aims to improve the method of constructing a user profile, which can be regarded as the most essential part of a web agent. To automatically extract a user profile by inductive machine learning, it is very important to automatically classify documents into the fields the user is interested in. Until now, documents have been classified manually according to the user's interest, but as the volume of documents grows exponentially, the number of documents that can be handled this way is inevitably limited. Moreover, if manual document classification were applied to a web agent as is, the burden on the user of classifying every document would halve the agent's usefulness. This study therefore provides an unsupervised document classification algorithm and a concrete method of post-processing the resulting classification information to obtain a more concise and accurate document classification.

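The abstract above does not spell out its clustering algorithm; the following is only a rough illustration of the general idea of clustering documents without supervision and then post-processing the result into a more concise grouping. The TF-IDF/k-means choice, the merge threshold, and all names are assumptions for illustration, not the paper's method.

```python
# Minimal sketch: unsupervised clustering of documents into interest groups,
# followed by a simple post-processing pass that merges highly similar clusters.
# The clustering method and thresholds are illustrative, not the paper's algorithm.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

def cluster_documents(docs, n_clusters=3, merge_threshold=0.8):
    vectors = TfidfVectorizer(stop_words="english").fit_transform(docs)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(vectors)

    # Post-processing: merge clusters whose centroids are nearly identical.
    centroids = np.vstack([np.asarray(vectors[labels == c].mean(axis=0))
                           for c in range(n_clusters)])
    sims = cosine_similarity(centroids)
    mapping = list(range(n_clusters))
    for i in range(n_clusters):
        for j in range(i + 1, n_clusters):
            if sims[i, j] >= merge_threshold:
                mapping[j] = mapping[i]          # fold cluster j into cluster i
    return [mapping[label] for label in labels]

pages = ["stock market investing tips", "deep learning for images",
         "neural networks and computer vision", "bond yields and markets"]
print(cluster_documents(pages))
```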

Efficient Object Classification Scheme for Scanned Educational Book Image (교육용 도서 영상을 위한 효과적인 객체 자동 분류 기술)

  • Choi, Young-Ju;Kim, Ji-Hae;Lee, Young-Woon;Lee, Jong-Hyeok;Hong, Gwang-Soo;Kim, Byung-Gyu
    • Journal of Digital Contents Society / v.18 no.7 / pp.1323-1331 / 2017
  • Although copyright has grown into a large-scale business, persistent problems remain, especially in image copyright. In this study, we propose an automatic object extraction and classification system for scanned educational book images that combines document image processing with intelligent information technology such as deep learning. First, the proposed technique removes noise components and then performs visual attention assessment-based region separation. Next, we carry out a grouping operation on the extracted block areas and categorize each block as a picture or a character area. Finally, the caption area is extracted by searching around the classified picture areas. In the performance evaluation, an average accuracy of 83% was observed for extraction of the image and caption areas, and up to 97% accuracy was verified for image region detection alone.
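
The pipeline above (noise removal, visual attention-based region separation, block grouping, picture/character classification, caption search) relies on deep learning; as a much simplified sketch of the block-level idea only, using OpenCV and size/density heuristics that are assumed here for illustration, one might write:

```python
# Simplified sketch of block extraction and picture/text classification for a
# scanned page. The real paper uses visual-attention assessment and deep
# learning; the size/density heuristics below are assumptions for illustration.
import cv2
import numpy as np

def classify_blocks(image_path, min_area=500):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    gray = cv2.medianBlur(gray, 3)                       # noise removal
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY_INV, 25, 15)
    # Dilate so nearby characters merge into block-level regions (grouping step).
    merged = cv2.dilate(binary, np.ones((9, 9), np.uint8), iterations=2)

    blocks = []
    contours, _ = cv2.findContours(merged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < min_area:
            continue
        density = binary[y:y + h, x:x + w].mean() / 255.0
        # Heuristic: large, dense regions are pictures; sparse ones are text.
        label = "picture" if (w * h > 40000 and density > 0.5) else "text"
        blocks.append({"bbox": (x, y, w, h), "type": label})

    # Caption heuristic: a text block located just below a picture block.
    for b in blocks:
        if b["type"] != "text":
            continue
        bx, by, bw, bh = b["bbox"]
        for p in blocks:
            if p["type"] == "picture":
                px, py, pw, ph = p["bbox"]
                if 0 <= by - (py + ph) < 60 and abs(bx - px) < pw:
                    b["type"] = "caption"
    return blocks
```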

The Study on Dynamic Images Processing for Finger Languages (지화 인식을 위한 동영상 처리에 관한 연구)

  • Kang, Min-Ji;Choi, Eun-Sook;Sohn, Young-Sun
    • Journal of the Korean Institute of Intelligent Systems / v.14 no.2 / pp.184-189 / 2004
  • In this paper, we realized a system that receives dynamic images of finger language, a method of communication for hearing-impaired people, through a black-and-white CCD camera, recognizes the images, and converts them into an editable text document. We use afterimages to draw a sharp line between indistinct and clear images in the series of input images, obtain each character of the alphabet from the sequence of continuous images, and output the completed character to the word editor by applying automata theory. After the system removes the variable wrist region from the clean image data, it obtains the centroid of the hand by the maximum circular movement method and recognizes the hand region needed to analyze the finger language by applying the circular pattern vector algorithm. The system extracts characteristic vectors of the hand using the distance spectrum from the center of the hand, compares the characteristic vector of the input pattern with the standard patterns by applying fuzzy inference, and recognizes the finger-language movement.
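
As a small, assumed illustration of the "distance spectrum from the center of the hand" feature mentioned above (not the paper's circular pattern vector algorithm or its fuzzy inference rules), a sketch could sample the farthest hand pixel in each direction around the centroid and compare two such spectra with a simple min/max fuzzy similarity:

```python
# Sketch of a centroid-based distance spectrum feature for a binary hand mask,
# compared with a simple fuzzy (min/max) similarity. This is an assumed
# simplification, not the paper's circular pattern vector algorithm.
import numpy as np

def distance_spectrum(mask, n_angles=36):
    ys, xs = np.nonzero(mask)                 # foreground pixels of the hand
    cy, cx = ys.mean(), xs.mean()             # centroid of the hand region
    angles = np.arctan2(ys - cy, xs - cx)
    radii = np.hypot(ys - cy, xs - cx)
    bins = ((angles + np.pi) / (2 * np.pi) * n_angles).astype(int) % n_angles
    spectrum = np.zeros(n_angles)
    for b in range(n_angles):
        if np.any(bins == b):
            spectrum[b] = radii[bins == b].max()   # farthest pixel per direction
    return spectrum / (spectrum.max() or 1.0)      # normalize to [0, 1]

def fuzzy_similarity(a, b):
    # Ratio of the overlapping (min) area to the covering (max) area.
    return np.minimum(a, b).sum() / np.maximum(a, b).sum()
```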

A Leveling and Similarity Measure using Extended AHP of Fuzzy Term in Information System (정보시스템에서 퍼지용어의 확장된 AHP를 사용한 레벨화와 유사성 측정)

  • Ryu, Kyung-Hyun;Chung, Hwan-Mook
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.2 / pp.212-217 / 2009
  • Rule-based and statistics-based learning methods are representative approaches to learning hierarchical relations between domain terms. In this paper, we propose a leveling and similarity measure for fuzzy terms in an information system using an extended AHP. In the proposed method, we extract fuzzy terms from documents, organize them into an ontology structure, and level the priority of fuzzy terms with the extended AHP according to their specificity. The extended AHP integrates multiple decision-makers' weights and the relative importance of fuzzy terms. We then compute the semantic similarity of fuzzy terms using the min operation of fuzzy sets, Dice's coefficient, and a combined Min+Dice's coefficient method, determine the final alternative fuzzy term, and compare the three similarity measures. The results show that the proposed method gives more definite classification performance than conventional methods and can be applied in the natural language processing field.
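
The three similarity measures named above can be written compactly if two fuzzy terms are represented as membership-degree vectors over a shared feature set. The sketch below is one assumed reading of the abstract (the equal-weight Min+Dice combination in particular is a guess), not the paper's exact formulation:

```python
# Sketch of the three similarity measures named in the abstract, applied to two
# fuzzy terms represented as membership-degree vectors over shared features.
# The equal-weight combination in min_plus_dice is an assumption.
import numpy as np

def min_similarity(a, b):
    # Fuzzy-set min operation: overlap relative to the smaller total membership.
    return np.minimum(a, b).sum() / min(a.sum(), b.sum())

def dice_coefficient(a, b):
    # Dice's coefficient on membership vectors: 2|A∩B| / (|A| + |B|).
    return 2 * np.minimum(a, b).sum() / (a.sum() + b.sum())

def min_plus_dice(a, b, alpha=0.5):
    return alpha * min_similarity(a, b) + (1 - alpha) * dice_coefficient(a, b)

term_a = np.array([0.9, 0.4, 0.0, 0.7])   # membership degrees of fuzzy term A
term_b = np.array([0.8, 0.1, 0.3, 0.6])   # membership degrees of fuzzy term B
print(min_similarity(term_a, term_b), dice_coefficient(term_a, term_b))
```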

A Study On The Application of RPA(Robotics Process Automation) For Productivity Of Business Documents (비즈니스 문서의 생산성 향상을 위한 RPA(Robotics Process Automation)적용방안에 대한 연구)

  • Hyun, Young Geun;Lee, Joo Yeoun
    • Journal of Digital Convergence / v.17 no.9 / pp.199-212 / 2019
  • Digitalization is creating a variety of changes and innovations in the business environment. In manufacturing, robots have long been used for automation to improve processing speed and quality. RPA brings these manufacturing innovations into the office space. The purpose of this study is to improve productivity for simple, repetitive tasks in the office. To identify the automation potential related to productivity improvement, we examined the concept of business automation and then simulated five areas of business documentation work using an agile methodology. In conclusion, we confirmed that productivity improvements of 97.3% in quality inspection and 31.7% in editorial design are possible, and examined how to apply the approach to actual work. Based on these results, future studies will explore the application of Intelligent Process Automation (IPA).

A Design and Implementation of A Robot Client Middleware for Network-based Intelligent Robot based on Service-Oriented (지능형 네트워크 로봇을 위한 서비스 지향적인 로봇 클라이언트 미들웨어 설계와 구현)

  • Kwak, Dong-Gyu;Choi, Jae-Young
    • The KIPS Transactions:PartA / v.19A no.1 / pp.1-8 / 2012
  • A network-based intelligent robot is connected to a network system, interacts with humans, and carries out its own roles in ubiquitous computing environments. The URC (Ubiquitous Robot Companion) robot has been proposed to develop network-based robots by applying distributed computing techniques. On the URC robot, the computing power of the robot client can be saved by offloading work to the server side, and SOMAR has been proposed to develop robot software using a service-oriented architecture in such server-client computing environments. The SOMAR client robot consists of two layers, the device service layer and the robot service layer. A device service controls physical devices, while a robot service abstracts the robot's services, which are newly defined and generated by combining multiple device services. RSEL (Robot Service Executing Language) is defined in this paper to represent the relations and connections between device services and robot services. An RSEL document, which defines robot services by combining several device services, is translated into a programming language for the robot client system using the RSEL translator; the translated source program is then compiled and uploaded to the robot client system with an RPC (Remote Procedure Call) command. A SOMAR client system is easy to apply to embedded systems with a host/target architecture. Moreover, a lightweight URC client robot can be produced by reducing the workload of the RSEL processing engine.
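
RSEL itself is defined in the paper and its syntax is not reproduced here; the sketch below only illustrates the two-layer idea described above, with device services wrapping physical devices and a robot service composed from several of them. All class and method names are invented for illustration:

```python
# Conceptual sketch of the two SOMAR client layers described above: device
# services wrap physical devices, and a robot service is composed from several
# device services. Class and method names are illustrative, not from RSEL.
from typing import Callable, List

class DeviceService:
    """Device service layer: controls one physical device."""
    def __init__(self, name: str, action: Callable[[], None]):
        self.name, self.action = name, action
    def run(self) -> None:
        self.action()

class RobotService:
    """Robot service layer: a named composition of device services."""
    def __init__(self, name: str, steps: List[DeviceService]):
        self.name, self.steps = name, steps
    def execute(self) -> None:
        for step in self.steps:          # run the composed device services in order
            step.run()

wheel = DeviceService("wheel", lambda: print("driving forward"))
camera = DeviceService("camera", lambda: print("capturing image"))
patrol = RobotService("patrol", [wheel, camera, wheel])
patrol.execute()
```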

eXtensible Rule Markup Language (XRML): Design Principles and Application (확장형 규칙 표식 언어(eXtensible Rule Markup Language) : 설계 원리 및 응용)

  • 이재규;손미애;강주영
    • Journal of Intelligence and Information Systems / v.8 no.1 / pp.141-157 / 2002
  • eXtensible Markup Language (XML) is a markup language for data exchange on the Internet. In this paper, we propose eXtensible Rule Markup Language (XRML), a language that extends XML. The implicit rules embedded in Web pages should be identifiable, interchangeable in a structured rule format, and finally accessible by various applications; XRML makes this possible. In this light, Web-based Knowledge Management Systems (KMS) can be integrated with rule-based expert systems. To this end, we propose six design criteria: Expressional Completeness, Relevance Linkability, Polymorphous Consistency, Applicative Universality, Knowledge Integrability, and Interoperability. Furthermore, we propose three components, RIML (Rule Identification Markup Language), RSML (Rule Structure Markup Language), and RTML (Rule Triggering Markup Language), together with their Document Type Definitions (DTDs). We have designed XRML version 0.5 as illustrated above and developed a prototype named Form/XRML, an automated form-processing application for disbursement of research funds at the Korea Advanced Institute of Science and Technology (KAIST). Since XRML allows both humans and software agents to use the rules, there is huge application potential. We expect that XRML can contribute to the progress of Semantic Web platforms, making knowledge management and e-commerce more intelligent. Since many emerging research groups and vendors are investigating this issue, it will not take long to see commercial XRML products. Mature XRML applications may change the way information and knowledge systems are designed in the near future.

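The actual RIML/RSML/RTML DTDs are defined in the paper and are not reproduced here; the sketch below only illustrates the general idea of a structured rule expressed in markup and then triggered by a program, using element names invented purely for illustration:

```python
# Purely hypothetical sketch of the XRML idea: a structured rule embedded in
# markup is parsed and then triggered by a program. The element names below are
# invented for illustration and are NOT the actual RIML/RSML/RTML DTDs.
import xml.etree.ElementTree as ET

RULE_DOC = """
<rule name="fund_disbursement">
  <if field="amount" op="lte" value="1000000"/>
  <then action="approve"/>
  <else action="require_review"/>
</rule>
"""

def trigger(rule_xml: str, form: dict) -> str:
    rule = ET.fromstring(rule_xml)
    cond = rule.find("if")
    ok = form[cond.get("field")] <= float(cond.get("value"))  # only 'lte' handled here
    branch = "then" if ok else "else"
    return rule.find(branch).get("action")

print(trigger(RULE_DOC, {"amount": 250000}))   # -> approve
```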

A Study on the Intelligent Document Processing Platform for Document Data Informatization (문서 데이터 정보화를 위한 지능형 문서처리 플랫폼에 관한 연구)

  • Hee-Do Heo;Dong-Koo Kang;Young-Soo Kim;Sam-Hyun Chun
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.24 no.1 / pp.89-95 / 2024
  • Nowadays, the competitiveness of a company depends on the ability of all organizational members to share and utilize the knowledge accumulated by the organization. As if to prove this, the world is now focusing on the ChatGPT service, which uses generative AI technology based on LLMs (Large Language Models). However, it is still difficult to apply the ChatGPT service to work because of frequent hallucination problems. To solve this problem, sLLM (lightweight large language model) technology is being proposed as an alternative. To construct an sLLM, corporate data is essential. Corporate data consists of the organization's ERP data and the office document knowledge preserved by the organization. ERP data can be used by connecting it directly to the sLLM, but office documents are stored in file format and must be converted into data format to be connected to the sLLM. In addition, there are too many technical limitations to utilizing office documents stored in file format as organizational knowledge information. This study proposes a method of storing office documents in DB format rather than file format, allowing companies to utilize already accumulated office documents as an organizational knowledge system and to provide office documents in data form to the company's sLLM. We aim to contribute to improving corporate competitiveness by combining this approach with AI technology.
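
As a small, assumed sketch of the core idea above (converting office documents from file format into database records so that their text can later be supplied to an sLLM), the following uses python-docx and SQLite; the schema, field names, and library choice are illustrative, not the platform described in the paper:

```python
# Sketch: convert .docx office documents into database records so their text can
# later be fed to an sLLM pipeline. Library choice, schema, and field names are
# assumptions for illustration, not the platform described in the paper.
import sqlite3
from pathlib import Path
from docx import Document   # pip install python-docx

def ingest_documents(folder: str, db_path: str = "office_docs.db") -> None:
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS documents (
                        id INTEGER PRIMARY KEY,
                        filename TEXT,
                        paragraph_no INTEGER,
                        content TEXT)""")
    for path in Path(folder).glob("*.docx"):
        doc = Document(str(path))
        for i, para in enumerate(doc.paragraphs):
            if para.text.strip():   # skip empty paragraphs
                conn.execute(
                    "INSERT INTO documents (filename, paragraph_no, content) VALUES (?, ?, ?)",
                    (path.name, i, para.text))
    conn.commit()
    conn.close()

# ingest_documents("./office_documents")   # example call
```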

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.69-94 / 2017
  • Recently, the increasing demand for big data analysis has been driving vigorous development of related technologies and tools. In addition, the development of IT and the increased penetration of smart devices are producing a large amount of data, and as a result data analysis technology is rapidly becoming popular. Attempts to acquire insights through data analysis have also been increasing continuously, which means that big data analysis will be even more important in various industries for the foreseeable future. Big data analysis has generally been performed by a small number of experts and delivered to each demander of analysis. However, growing interest in big data analysis has stimulated computer programming education and the development of many data analysis programs. Accordingly, the entry barriers to big data analysis are gradually lowering and data analysis technology is spreading, so big data analysis is expected to be performed by the demanders of analysis themselves. Along with this, interest in various kinds of unstructured data is continually increasing, and a lot of attention is focused on using text data. The emergence of new web-based platforms and techniques has brought about mass production of text data and active attempts to analyze it, and the results of text analysis have been utilized in various fields. Text mining is a concept that embraces various theories and techniques for text analysis. Among the many text mining techniques used for various research purposes, topic modeling is one of the most widely used and studied. Topic modeling is a technique that extracts the major issues from a large number of documents, identifies the documents that correspond to each issue, and provides the identified documents as a cluster. It is regarded as a very useful technique in that it reflects the semantic elements of documents. Traditional topic modeling is based on the distribution of key terms across the entire document collection, so it is essential to analyze the whole collection at once to identify the topic of each document. This requirement leads to long analysis times when topic modeling is applied to a large number of documents. In addition, it has a scalability problem: processing time increases exponentially with the number of analysis objects. This problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling: a large number of documents is divided into sub-units, and topics are derived by repeatedly applying topic modeling to each unit. This method allows topic modeling on a large number of documents with limited system resources and can improve processing speed. It can also significantly reduce analysis time and cost, since documents in each location can be analyzed without combining all target documents. Despite these advantages, however, the method has two major problems. First, the relationship between local topics derived from each unit and global topics derived from the entire document collection is unclear; local topics can be identified within each unit, but global topics cannot. Second, a method for measuring the accuracy of such a methodology needs to be established; that is, assuming that the global topics are the ideal answer, the deviation of local topics from global topics needs to be measured.
Because of these difficulties, this approach has not been studied sufficiently compared with other topic modeling studies. In this paper, we propose a topic modeling approach that solves the above two problems. First, we divide the entire document cluster (global set) into sub-clusters (local sets) and generate a reduced global set (RGS) consisting of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. We also verify the accuracy of the proposed methodology by detecting whether each document is assigned the same topic in the global and local results. Using 24,000 news articles, we conducted experiments to evaluate the practical applicability of the proposed methodology. Through an additional experiment, we confirmed that the proposed methodology can provide results similar to those of topic modeling on the entire collection, and we also proposed a reasonable method for comparing the results of the two approaches.
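
A compressed sketch of the divide-and-conquer idea above might fit one LDA model per local set over a shared vocabulary and then map each local topic to the most similar topic of a global model. Fitting the global model on all documents (as a stand-in for the RGS) and mapping by cosine similarity of topic-word distributions are assumptions about one reasonable realization, not necessarily the paper's exact procedure:

```python
# Sketch: divide-and-conquer topic modeling with a local-to-global topic mapping.
# Local LDA models are fit per sub-cluster over a shared vocabulary, and each
# local topic is mapped to the most similar global topic by cosine similarity of
# topic-word distributions. The mapping rule is an assumption for illustration.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

def local_to_global_topics(docs, n_splits=2, n_topics=3):
    vectorizer = CountVectorizer(stop_words="english")
    counts = vectorizer.fit_transform(docs)              # shared vocabulary

    global_lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    global_lda.fit(counts)                                # stand-in for the RGS model

    mapping = {}
    splits = np.array_split(np.arange(len(docs)), n_splits)
    for s, idx in enumerate(splits):
        local_lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
        local_lda.fit(counts[idx])
        sims = cosine_similarity(local_lda.components_, global_lda.components_)
        for t in range(n_topics):
            mapping[(s, t)] = int(sims[t].argmax())       # local topic -> global topic
    return mapping
```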

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.43-61 / 2019
  • The development of artificial intelligence technologies has accelerated with the Fourth Industrial Revolution, and AI research has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems such as learning and problem solving related to human intelligence. Thanks to recent interest in the technology and research on various algorithms, the field of artificial intelligence has achieved more technological progress than ever. The knowledge-based system is a sub-domain of artificial intelligence that aims to enable AI agents to make decisions by using machine-readable and processable knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it has been used together with statistical artificial intelligence such as machine learning. More recently, the purpose of the knowledge base has been to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. These knowledge bases are used for intelligent processing in various fields of artificial intelligence, such as the question answering systems of smart speakers. However, building a useful knowledge base is a time-consuming task that still requires a lot of expert effort. In recent years, much research and technology in knowledge-based artificial intelligence has used DBpedia, one of the largest knowledge bases, which aims to extract structured content from the information in Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, which present user-created summaries of some unifying aspect of an article. This knowledge is created by the mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. In this way, DBpedia can expect high reliability in terms of knowledge accuracy, since knowledge is generated from semi-structured infobox data created by users. However, since only about 50% of all pages in the Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model that follows the DBpedia ontology schema by learning from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: document classification into ontology classes, classification of the appropriate sentences for extracting triples, and value selection and transformation into an RDF triple structure. The structure of Wikipedia infoboxes is defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these infobox templates. Based on these mapping relations, we classify the input document according to the infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that class. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triple form.
To train the models, we generated a training data set from a Wikipedia dump using a method that adds BIO tags to sentences, and we trained about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we conducted comparative experiments between CRF and Bi-LSTM-CRF for the knowledge extraction process. Through the proposed process, structured knowledge can be utilized by extracting knowledge from text documents according to the ontology schema. In addition, this methodology can significantly reduce the effort required of experts to construct instances according to the ontology schema.
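
As a small illustration of the last two steps (extracting a BIO-tagged value and converting it into a triple), a sketch under an assumed tag scheme might look like the following; in the paper the tags themselves come from CRF or Bi-LSTM-CRF models trained on about 200 classes and 2,500 relations:

```python
# Sketch: turn a BIO-tagged sentence into an RDF-style triple. The tag scheme
# (B-/I- prefixes with a relation name) and the example sentence are assumed for
# illustration; the paper trains CRF / Bi-LSTM-CRF models to produce such tags.
def bio_to_triple(subject, tokens, tags, relation="birthPlace"):
    value_tokens = [tok for tok, tag in zip(tokens, tags)
                    if tag in (f"B-{relation}", f"I-{relation}")]
    if not value_tokens:
        return None
    return (subject, relation, " ".join(value_tokens))   # (subject, predicate, object)

tokens = ["Yi", "Sun-sin", "was", "born", "in", "Hanseong", "."]
tags   = ["O",  "O",       "O",   "O",    "O",  "B-birthPlace", "O"]
print(bio_to_triple("Yi_Sun-sin", tokens, tags))
# ('Yi_Sun-sin', 'birthPlace', 'Hanseong')
```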