• Title/Summary/Keyword: ontology development methodology

Suggestions for the Development of RegTech Based Ontology and Deep Learning Technology to Interpret Capital Market Regulations (레그테크 기반의 자본시장 규제 해석 온톨로지 및 딥러닝 기술 개발을 위한 제언)

  • Choi, Seung Uk;Kwon, Oh Byung
    • The Journal of Information Systems
    • /
    • v.30 no.1
    • /
    • pp.65-84
    • /
    • 2021
  • Purpose: With the development of artificial intelligence and big data technologies, RegTech has emerged to reduce regulatory costs and to enable efficient supervision by regulatory bodies. The word RegTech is a combination of regulation and technology and means using technological methods to facilitate the implementation of regulations and to make the surveillance and supervision of regulations efficient. The purpose of this study is to describe the recent adoption of RegTech and to provide basic examples of applying RegTech to capital market regulations. Design/methodology/approach: English-based ontology and deep learning technologies are quite developed in practice, and it would not be difficult to extend them to European or Latin American languages that are grammatically similar to English. However, it is not easy to use them in most Asian languages, such as Korean, which have different grammatical rules. In addition, in the early stages of adoption, companies, financial institutions, and regulators will not be familiar with this machine-based reporting system, so an ecosystem that facilitates the adoption of RegTech by consulting and supporting the stakeholders needs to be established. In this paper, we provide a simple example that shows a procedure for applying RegTech to recognize and interpret Korean-language capital market regulations. Specifically, we present the process of converting sentences in regulations into a meta-language through morpheme analysis, and we then conduct deep learning analyses to determine whether a regulatory sentence exists in each regulatory paragraph. Findings: This study illustrates the applicability of RegTech-based ontology and deep learning technologies to Korean-language capital market regulations.
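
The pipeline sketched in this abstract, morpheme analysis of Korean regulatory sentences followed by a classifier that flags regulatory sentences, could look roughly like the fragment below. The KoNLPy Okt tagger, the TF-IDF features, the shallow scikit-learn network standing in for the deep learning model, and the toy sentences and labels are all assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: morpheme analysis of Korean regulatory text plus a
# small neural classifier that flags whether a sentence states a requirement.
from konlpy.tag import Okt
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier

okt = Okt()

def to_meta_tokens(sentence: str) -> str:
    # Morpheme analysis: keep nouns and verbs as a crude "meta-language" proxy.
    return " ".join(m for m, tag in okt.pos(sentence) if tag in ("Noun", "Verb"))

# Hypothetical toy data: label 1 if the sentence states an obligation.
sentences = ["금융투자업자는 보고서를 제출하여야 한다", "이 조는 예시를 설명한다"]
labels = [1, 0]

vec = TfidfVectorizer()
X = vec.fit_transform([to_meta_tokens(s) for s in sentences])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, labels)
print(clf.predict(vec.transform([to_meta_tokens("임원은 변경 사항을 신고하여야 한다")])))
```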

Product Data Interoperability based on Layered Reference Ontology (계층적 참조 온톨로지 기반의 제품정보 간 상호운용성 확보)

  • Seo, Won-Chul;Lee, Sun-Jae;Kim, Byung-In;Lee, Jae-Yeol;Kim, Kwang-Soo
    • The Journal of Society for e-Business Studies
    • /
    • v.11 no.3
    • /
    • pp.53-71
    • /
    • 2006
  • In order to cope with the rapidly changing product development environment, individual manufacturing enterprises are forced to collaborate with each other by establishing a virtual organization. In such collaboration, the designated organizations work together for mutual gain on the basis of product data interoperability. However, product data interoperability is not fully achieved because of semantic inconsistency among the product data models of the individual enterprises. To overcome this semantic inconsistency problem, this paper proposes a reference ontology, the Reference Domain Ontology (RDO), and a methodology for product data interoperability with semantic consistency using RDO. RDO describes the semantics of the product data model and metamodel for all application domains in a virtual organization, so the application domains can easily understand each other's product data models. RDO is agile and temporal: it is created when a virtual organization is formed, evolves with changes in the organization, and is discarded when the organization dissolves. RDO is built by a hybrid approach, top-down using an upper ontology and bottom-up by merging the ontologies of the application domains in the virtual organization. With this methodology, every domain in a virtual organization can achieve product data model interoperability without model transformation.
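
A minimal rdflib sketch of the core RDO idea, with invented URIs and class names: two application-domain product concepts are linked to a shared reference concept with owl:equivalentClass, so either domain can look up what the other calls the same thing.

```python
# Hypothetical reference-ontology alignment in the spirit of RDO.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF

RDO   = Namespace("http://example.org/rdo#")      # reference ontology (assumed URI)
DOM_A = Namespace("http://example.org/domainA#")  # CAD vendor A (assumed)
DOM_B = Namespace("http://example.org/domainB#")  # supplier B (assumed)

g = Graph()
for cls in (RDO.Part, DOM_A.Component, DOM_B.Item):
    g.add((cls, RDF.type, OWL.Class))

# Bottom-up: align each domain's concept with the shared RDO concept.
g.add((DOM_A.Component, OWL.equivalentClass, RDO.Part))
g.add((DOM_B.Item, OWL.equivalentClass, RDO.Part))

# Any domain can now ask which local concepts correspond to an RDO Part.
for s, _, _ in g.triples((None, OWL.equivalentClass, RDO.Part)):
    print(s)
```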

National Defense Domain Ontology Development Using Mixed Ontology Building Methodology (MOBM) (혼합형 온톨로지 구축방법론을 이용한 국방온톨로지 구축)

  • Ra, Minyoung;Yoo, Donghee;No, Sungchun;Shin, Jinhee;Han, Changhee
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2012.04a
    • /
    • pp.279-282
    • /
    • 2012
  • This study demonstrates the process of building a national defense ontology applicable to the ATCIS system using the Mixed Ontology Building Methodology (MOBM). To this end, actual ATCIS database information was used, and the additional considerations that arise when the methodology is applied to the ATCIS system were analyzed. The results are expected to serve as a basis for building more practical national defense ontologies in the future.

Design and Construction of a NLP Based Knowledge Extraction Methodology in the Medical Domain Applied to Clinical Information

  • Moreno, Denis Cedeno;Vargas-Lombardo, Miguel
    • Healthcare Informatics Research
    • /
    • v.24 no.4
    • /
    • pp.376-380
    • /
    • 2018
  • Objectives: This research presents the design and development of a software architecture that uses natural language processing tools and an ontology as its knowledge base. Methods: The software extracts, manages, and represents the knowledge contained in natural language text. It was validated on a corpus of more than 200 medical-domain documents from the general medicine and palliative care areas, demonstrating knowledge elements relevant to physicians. Results: Precision, recall, and F-measure indicators were applied. An ontology called the knowledge elements of the medical domain was created to manage patient information, and it can be read or accessed from any other software platform. Conclusions: The developed software architecture extracts medical knowledge from the clinical histories of patients in two different corpora. The architecture was validated using the standard metrics of information extraction systems.
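
The precision/recall/F-measure evaluation mentioned in the abstract can be illustrated with a small self-contained sketch; the gold-standard and extracted knowledge elements below are invented examples, not the paper's data.

```python
# Sketch of the evaluation step: compare extracted knowledge elements against
# a hypothetical gold-standard annotation of the same clinical text.
def prf(gold: set, extracted: set):
    tp = len(gold & extracted)
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = {"diagnosis:hypertension", "drug:metformin", "symptom:dyspnea"}
extracted = {"diagnosis:hypertension", "drug:metformin", "drug:aspirin"}
print(prf(gold, extracted))  # roughly (0.667, 0.667, 0.667)
```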

Syntactic and semantic information extraction from NPP procedures utilizing natural language processing integrated with rules

  • Choi, Yongsun;Nguyen, Minh Duc;Kerr, Thomas N. Jr.
    • Nuclear Engineering and Technology
    • /
    • v.53 no.3
    • /
    • pp.866-878
    • /
    • 2021
  • Procedures play a key role in ensuring safe operation at nuclear power plants (NPPs). Development and maintenance of a large number of procedures reflecting the best knowledge available in all relevant areas is a complex job. This paper introduces a newly developed methodology and the implemented software, called iExtractor, for the extraction of syntactic and semantic information from NPP procedures utilizing natural language processing (NLP)-based technologies. The steps of the iExtractor integrated with sets of rules and an ontology for NPPs are described in detail with examples. Case study results of the iExtractor applied to selected procedures of a U.S. commercial NPP are also introduced. It is shown that the iExtractor can provide overall comprehension of the analyzed procedures and indicate parts of procedures that need improvement. The rich information extracted from procedures could be further utilized as a basis for their enhanced management.
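
The general idea of combining NLP with rules for procedure text can be illustrated as follows; this is not the iExtractor itself, and the spaCy model, the POS pattern, and the example step are assumptions made for the sketch.

```python
# Rule-assisted extraction of "action + object" phrases from a procedure step.
# Requires spaCy and the en_core_web_sm model.
import spacy
from spacy.matcher import Matcher

nlp = spacy.load("en_core_web_sm")
matcher = Matcher(nlp.vocab)
# Rule: an imperative verb followed by an optional determiner and one or more nouns.
matcher.add("ACTION_OBJECT",
            [[{"POS": "VERB"}, {"POS": "DET", "OP": "?"}, {"POS": "NOUN", "OP": "+"}]])

doc = nlp("Verify the reactor coolant pump and open the isolation valve.")
for _, start, end in matcher(doc):
    print(doc[start:end].text)
```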

A Multi-Agent Approach to Context-Aware Optimization for Personalized Mobile Web Service (상황인지 기반 최적화가 가능한 개인화된 모바일 웹서비스 구축을 위한 다중에이전트 접근법에 관한 연구)

  • Kwon Oh-byung;Lee Ju-chul
    • Korean Management Science Review
    • /
    • v.21 no.3
    • /
    • pp.23-38
    • /
    • 2004
  • Recently, the use of mobile devices that provide access to the Internet has increased dramatically. Most mobile services so far, however, tend to be simple, such as infotainment services. To take full advantage of wireless networks and the corresponding technologies, personalized web services based on the user's context are needed. Meanwhile, optimization techniques have been widely incorporated to optimize the development and administration of electronic commerce, but studies applying context-aware optimization mechanisms to personalized mobile services are still very few. Hence, the purpose of this paper is to propose a methodology for incorporating optimization techniques into personalization services. A multi-agent-based web service approach is adopted to realize the methodology. To show its feasibility, a prototype system, CAMA-myOPt (Context-Aware Multi-Agent system for my Optimization), was implemented and applied to mobile comparison shopping.

Practical Text Mining for Trend Analysis: Ontology to visualization in Aerospace Technology

  • Kim, Yoosin;Ju, Yeonjin;Hong, SeongGwan;Jeong, Seung Ryul
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.8
    • /
    • pp.4133-4145
    • /
    • 2017
  • Advances in science and technology are leading us toward a better life but also require ever larger investments. The government has therefore funded promising future technologies, and for several decades substantial public resources have supported science and technology R&D projects. However, the performance of these public investments remains unclear in many respects, so planning and evaluation of new investments should rest on data-driven decisions supported by fact-based evidence. In this regard, the government wants to understand the trends and issues in science and technology with supporting evidence, and it has accumulated large databases about science and technology, including research papers, patents, project reports, and R&D information. These databases already support activities such as policy planning, budget allocation, and investment evaluation, but the quality of the resulting information falls short of expectations because of the limitations of text mining in extracting information from unstructured data such as reports and papers. To address this problem, this study proposes a practical text mining methodology for science and technology trend analysis, using aerospace technology as a case, and applies text mining methods such as ontology development, topic analysis, network analysis, and their visualization.
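
One step of such a trend-analysis pipeline, building a keyword co-occurrence network that can later be visualized, might be sketched as below; the keyword lists are invented and networkx stands in for whatever tooling the authors actually used.

```python
# Keyword co-occurrence network from (hypothetical) aerospace document keywords.
from itertools import combinations
import networkx as nx

docs = [
    ["satellite", "propulsion", "thruster"],
    ["satellite", "imaging", "payload"],
    ["propulsion", "thruster", "electric"],
]  # toy keyword lists; real input would come from mining the reports and papers

G = nx.Graph()
for keywords in docs:
    for a, b in combinations(sorted(set(keywords)), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# Nodes ranked by weighted degree approximate the "hot" topics of the corpus.
print(sorted(G.degree(weight="weight"), key=lambda x: -x[1]))
```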

A Conversion from HTML5 to OWL Ontology (HTML5 문서로부터 OWL 온톨로지 구축 기법)

  • Sun, Taimao;Yoon, Yiyeon;Kim, Wooju
    • The Journal of Society for e-Business Studies
    • /
    • v.18 no.3
    • /
    • pp.143-158
    • /
    • 2013
  • HTML5, the new standard web language, has been standardized in step with the development of the web. Because several new semantic elements have been added to the HTML5 standard, the current web environment is becoming increasingly semantic. To provide a better user experience through information extraction from HTML5 pages, the new HTML5 elements should be mapped to a corresponding ontology. This research focuses on the new semantic elements in order to build an ontology from HTML5 documents. For this purpose, we propose a methodology consisting of schema-level mapping rules and instance mapping rules.
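
A hypothetical sketch in the spirit of the proposed mapping rules: HTML5 semantic elements are mapped to OWL classes (schema-level rule) and each occurrence in a page becomes an instance (instance rule). The namespace, class names, and the tag-to-class table are assumptions, not the paper's actual rules.

```python
# Map HTML5 semantic elements to OWL classes and page occurrences to instances.
from bs4 import BeautifulSoup
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/html5#")           # assumed ontology namespace
TAG2CLASS = {"article": EX.Article, "section": EX.Section, "nav": EX.Navigation}

html = "<article><h1>Ontology news</h1><section>OWL 2 profiles...</section></article>"
soup = BeautifulSoup(html, "html.parser")

g = Graph()
for cls in TAG2CLASS.values():                        # schema-level mapping rule
    g.add((cls, RDF.type, OWL.Class))
for i, el in enumerate(soup.find_all(list(TAG2CLASS))):  # instance mapping rule
    inst = EX[f"{el.name}_{i}"]
    g.add((inst, RDF.type, TAG2CLASS[el.name]))
    g.add((inst, RDFS.label, Literal(el.get_text(" ", strip=True)[:80])))
print(g.serialize(format="turtle"))
```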

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.43-61
    • /
    • 2019
  • The development of artificial intelligence technologies has accelerated with the Fourth Industrial Revolution, and AI research is being actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems, such as learning and problem solving, related to human intelligence, and thanks to recent interest in the technology and research on various algorithms, the field has achieved more technological progress than ever before. The knowledge-based system is a sub-domain of artificial intelligence; it aims to enable AI agents to make decisions by using machine-readable and processible knowledge constructed from the complex and informal knowledge and rules that humans use in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it has been used together with statistical artificial intelligence such as machine learning. Nowadays, the purpose of a knowledge base is to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. Such knowledge bases are used for intelligent processing in various areas of artificial intelligence, for example the question-answering systems of smart speakers. However, building a useful knowledge base is time-consuming and still requires a great deal of expert effort. In recent years, much knowledge-based AI research and technology has used DBpedia, one of the largest knowledge bases, which aims to extract structured content from the information in Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, which present a user-created summary of some unifying aspect of an article. This knowledge is created through the mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. In this way, DBpedia can expect high reliability in terms of the accuracy of its knowledge, because the knowledge is generated from semi-structured infobox data created by users. However, since only about 50% of all wiki pages in the Korean Wikipedia contain an infobox, DBpedia is limited in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model that follows the DBpedia ontology schema by learning from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classifying a document into ontology classes, classifying the sentences appropriate for triple extraction, and selecting values and transforming them into RDF triples. The structures of Wikipedia infoboxes are defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these templates. Based on these mapping relations, we classify the input document according to the infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that classification. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triples. To train the models, we generated a training data set from a Wikipedia dump by adding BIO tags to sentences, and we trained about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we conducted comparative experiments with CRF and Bi-LSTM-CRF models for the knowledge extraction process. Through this process, structured knowledge can be obtained by extracting knowledge from text documents according to the ontology schema. In addition, the methodology can significantly reduce the effort required of experts to construct instances according to the ontology schema.
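
The final step, turning BIO-tagged tokens into RDF triples, can be illustrated with a toy example; the sentence, the tags (which in the paper would come from the CRF or Bi-LSTM-CRF model), and the chosen DBpedia property are made up for illustration.

```python
# Convert a BIO-tagged sentence into a DBpedia-style RDF triple.
from rdflib import Graph, Namespace, Literal

DBO = Namespace("http://dbpedia.org/ontology/")
DBR = Namespace("http://dbpedia.org/resource/")

tokens = ["Seoul", "is", "the", "capital", "of", "South", "Korea", "."]
tags   = ["O", "O", "O", "O", "O", "B-country", "I-country", "O"]  # hypothetical model output

def bio_spans(tokens, tags):
    # Collect contiguous B-/I- spans and yield (property, surface value) pairs.
    span, prop = [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            span, prop = [tok], tag[2:]
        elif tag.startswith("I-") and span:
            span.append(tok)
        elif span:
            yield prop, " ".join(span)
            span, prop = [], None
    if span:
        yield prop, " ".join(span)

g = Graph()
for prop, value in bio_spans(tokens, tags):
    g.add((DBR.Seoul, DBO[prop], Literal(value)))
print(g.serialize(format="turtle"))
```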

Development of Methodology for Automated Office Room Generation Based on Space Utilization (공간 사용률 기반 오피스 실 생성 자동화 방법론 개발)

  • Song, Yoan;Jang, Jae Young;Cha, Seung Hyun
    • Journal of KIBIM
    • /
    • v.14 no.3
    • /
    • pp.1-12
    • /
    • 2024
  • Many efforts are being made to enhance user productivity and promote collaboration while ensuring the economic efficiency of office buildings. Analyzing space utilization, which indicates how users utilize spaces, has been a crucial factor in these efforts. Appropriate space utilization enhances building maintenance and space layout design, reducing unnecessary energy waste and under-occupied spaces. Recognizing the importance of space utilization, several studies have attempted to predict it using information about users, activities, and spaces. These studies proposed an ontology of this information and implemented automated activity-space mapping as part of space utilization prediction. Despite the existing studies, there remains a gap in integrating space utilization prediction with automated space layout design. As a foundational study to bridge this gap, our study proposes a novel methodology that automatically generates office rooms based on space utilization optimization. The methodology consists of three modules: activity-space mapping, space utilization calculation, and room generation. The first two modules use data on space types and user activity types as input to calculate and optimize space utilization through requirement-based activity-space mapping. After optimizing the space utilization value within an appropriate range, the number and area of each space type are determined. The room generation module then automatically generates rooms with the optimized areas and numbers. The practical application of the developed methodology is demonstrated, highlighting its effectiveness in a fabricated case scenario. By automatically generating rooms with optimal space utilization, our methodology shows potential for extension to the automated generation of optimized space layout designs based on space utilization.
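
A back-of-the-envelope sketch of the space utilization calculation described above: given weekly demand hours per space type from a hypothetical activity-space mapping, the number of rooms is chosen so that utilization falls inside a target band. All numbers, the band, and the mapping are invented for illustration.

```python
# Choose room counts per space type so that utilization lands in a target band.
import math

OPEN_HOURS = 45                      # assumed weekly operating hours
TARGET = (0.6, 0.8)                  # assumed acceptable utilization band

demand_hours = {                     # activity-space mapping result (hypothetical)
    "focus_room":   {"individual work": 120},
    "meeting_room": {"team meeting": 60, "client call": 25},
}

for space_type, activities in demand_hours.items():
    total = sum(activities.values())
    rooms = max(1, math.ceil(total / (OPEN_HOURS * TARGET[1])))  # respect the upper bound
    utilization = total / (rooms * OPEN_HOURS)
    print(f"{space_type}: {rooms} rooms, utilization {utilization:.0%}",
          "ok" if TARGET[0] <= utilization <= TARGET[1] else "outside band")
```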