• Title/Summary/Keyword: ontology development

Search results: 400

National Defense Domain Ontology Development Using Mixed Ontology Building Methodology (MOBM) (혼합형 온톨로지 구축방법론을 이용한 국방온톨로지 구축)

  • Ra, Minyoung;Yoo, Donghee;No, Sungchun;Shin, Jinhee;Han, Changhee
    • Proceedings of the Korea Information Processing Society Conference / 2012.04a / pp.279-282 / 2012
  • This study demonstrates the process of building a national defense ontology applicable to the ATCIS system using the Mixed Ontology Building Methodology (MOBM). To this end, actual ATCIS database information was used, and additional considerations for applying the methodology to the ATCIS system were analyzed. These results are expected to serve as foundational material for building a more practical national defense ontology in the future.

Design and Construction of a NLP Based Knowledge Extraction Methodology in the Medical Domain Applied to Clinical Information

  • Moreno, Denis Cedeno;Vargas-Lombardo, Miguel
    • Healthcare Informatics Research / v.24 no.4 / pp.376-380 / 2018
  • Objectives: This research presents the design and development of a software architecture using natural language processing tools and the use of an ontology of knowledge as a knowledge base. Methods: The software extracts, manages and represents the knowledge of a text in natural language. A corpus of more than 200 medical domain documents from the general medicine and palliative care areas was validated, demonstrating relevant knowledge elements for physicians. Results: Indicators for precision, recall and F-measure were applied. An ontology was created called the knowledge elements of the medical domain to manipulate patient information, which can be read or accessed from any other software platform. Conclusions: The developed software architecture extracts the medical knowledge of the clinical histories of patients from two different corpora. The architecture was validated using the metrics of information extraction systems.
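The precision, recall, and F-measure indicators applied in this abstract follow the standard information-extraction definitions. A minimal sketch (not the authors' code; the symptom terms are hypothetical):

```python
def extraction_metrics(predicted, gold):
    """Compute precision, recall and F1 for a set of extracted items
    against a gold-standard set (standard IE evaluation)."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)  # true positives: correctly extracted items
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical extraction run: 3 items extracted, 4 in the gold standard.
p, r, f = extraction_metrics({"fever", "cough", "nausea"},
                             {"fever", "cough", "headache", "nausea"})
```

Here every extracted item is correct (precision 1.0) but one gold item was missed (recall 0.75), and F1 is their harmonic mean.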

The Development of Subject Gateway and Library Operating Model for the Diffusion of Entrepreneurship

  • Park, Ok Nam
    • Journal of the Korean Society for Library and Information Science / v.55 no.1 / pp.439-467 / 2021
  • While the body of cases on startups has grown substantially, there has been no one-stop gateway for entrepreneurship. This study builds a subject gateway for startup information services based on case studies, user needs analysis, and literature reviews. The results show that users have difficulty selecting useful information because of information overload and because the desired information is scattered across a wide range of sources. The study designed a subject gateway with a navigation system that enables flexible browsing across the entire gateway through ontology modeling. The study also presents an example of startup records to show how startup information can be explored. This study is expected to contribute to the understanding of the current status of business startup services. A business startup digital gateway based on empirical data analysis will contribute to extending library services for startups.

Syntactic and semantic information extraction from NPP procedures utilizing natural language processing integrated with rules

  • Choi, Yongsun;Nguyen, Minh Duc;Kerr, Thomas N. Jr.
    • Nuclear Engineering and Technology / v.53 no.3 / pp.866-878 / 2021
  • Procedures play a key role in ensuring safe operation at nuclear power plants (NPPs). Development and maintenance of a large number of procedures reflecting the best knowledge available in all relevant areas is a complex job. This paper introduces a newly developed methodology and the implemented software, called iExtractor, for the extraction of syntactic and semantic information from NPP procedures utilizing natural language processing (NLP)-based technologies. The steps of the iExtractor integrated with sets of rules and an ontology for NPPs are described in detail with examples. Case study results of the iExtractor applied to selected procedures of a U.S. commercial NPP are also introduced. It is shown that the iExtractor can provide overall comprehension of the analyzed procedures and indicate parts of procedures that need improvement. The rich information extracted from procedures could be further utilized as a basis for their enhanced management.

Simulation of solar radiation and wind events in the virtual environments (가상 환경에서 태양 복사와 바람 현상의 논리적 시뮬레이션 방법)

  • Cho, Jin-Young;Park, Jong-Hee
    • The KIPS Transactions:PartB / v.10B no.7 / pp.785-794 / 2003
  • Computer simulation of natural phenomena has tended to focus on graphic processing for visual realism. Neglecting the causal origins of these phenomena and the natural laws governing their development limits the degree of immersion experienced by users. We attempt to develop a logical framework for authentic simulation of the diverse, unpredictable occurrence and development of natural phenomena (such as solar radiation and wind) based on their inherent laws and principles. To this end, we structure the relevant objects into an ontology and propose a data management method. We then describe our simulation method for the natural phenomena, delimited into phases, and present modeling techniques for qualitative changes in physical objects whose factor values move beyond normal ranges.

The Basic Concepts Classification as a Bottom-Up Strategy for the Semantic Web

  • Szostak, Rick
    • International Journal of Knowledge Content Development & Technology / v.4 no.1 / pp.39-51 / 2014
  • The paper proposes that the Basic Concepts Classification (BCC) could serve as the controlled vocabulary for the Semantic Web. The BCC uses a synthetic approach among classes of things, relators, and properties. These are precisely the sort of concepts required by RDF triples. The BCC also addresses some of the syntactic needs of the Semantic Web. Others could be added to the BCC in a bottom-up process that carefully evaluates the costs, benefits, and best format for each rule considered.
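The fit claimed here between BCC's things, relators, and properties and RDF's subject-predicate-object shape can be illustrated with a toy triple store; the vocabulary below is illustrative only, not actual BCC notation:

```python
# A toy triple store pairing BCC-style concepts (things, relators,
# properties) with RDF's subject-predicate-object statement shape.
triples = set()

def add_triple(thing, relator, value):
    """Store one statement: a class of things, a relator, a property or thing."""
    triples.add((thing, relator, value))

def query(thing=None, relator=None, value=None):
    """Return all stored triples matching the non-None components."""
    return [t for t in triples
            if (thing is None or t[0] == thing)
            and (relator is None or t[1] == relator)
            and (value is None or t[2] == value)]

# Hypothetical statements: a thing linked by a relator, and a property.
add_triple("Glacier", "causes", "Erosion")
add_triple("Glacier", "hasProperty", "Cold")
```

Querying with any component left open (e.g. `query("Glacier")`) retrieves matching statements, mirroring how a synthetic classification supplies each slot of an RDF triple.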

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.43-61 / 2019
  • Technologies in artificial intelligence have been developing rapidly with the Fourth Industrial Revolution, and AI research has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving. Owing to recent interest in the technology and research on various algorithms, the field of artificial intelligence has achieved greater technological advances than ever. The knowledge-based system is a sub-domain of artificial intelligence that aims to enable AI agents to make decisions using machine-readable and processable knowledge constructed from complex, informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it is used together with statistical artificial intelligence such as machine learning. More recently, the purpose of a knowledge base has been to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. Such knowledge bases are used for intelligent processing in various fields of artificial intelligence, such as the question answering systems of smart speakers. However, building a useful knowledge base is a time-consuming task that still requires considerable expert effort. In recent years, much knowledge-based AI research and technology has used DBpedia, one of the biggest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, which present user-created summaries of some unifying aspect of an article.
This knowledge is created by the mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. In this way, DBpedia can expect high reliability in terms of knowledge accuracy, since the knowledge is generated from semi-structured infobox data created by users. However, since only about 50% of all wiki pages in Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we explain a knowledge extraction model based on the DBpedia ontology schema that learns from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classifying documents into ontology classes, classifying the appropriate sentences from which to extract triples, and selecting values and transforming them into RDF triple structures. The structures of Wikipedia infoboxes are defined as infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these infobox templates. Based on these mapping relations, we classify the input document into infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that class. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triple form. To train the models, we generated a training data set from a Wikipedia dump using a method that adds BIO tags to sentences, training about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we conducted comparative experiments with CRF and Bi-LSTM-CRF models for the knowledge extraction process.
Through this proposed process, structured knowledge can be utilized by extracting knowledge from text documents according to the ontology schema. In addition, this methodology can significantly reduce the effort required of experts to construct instances according to the ontology schema.
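The training-data step described in this abstract (adding BIO tags to sentences so that infobox values become labeled spans) can be sketched roughly as follows. This is not the paper's implementation; the tokenization, sentence, and attribute name are illustrative:

```python
def bio_tag(tokens, value_tokens, label):
    """Tag tokens with B-/I- over the span matching a known infobox
    value, and O elsewhere (the BIO scheme used to build training
    data for sequence labelers such as CRF or Bi-LSTM-CRF)."""
    tags = ["O"] * len(tokens)
    n = len(value_tokens)
    for i in range(len(tokens) - n + 1):
        if tokens[i:i + n] == value_tokens:
            tags[i] = f"B-{label}"          # beginning of the value span
            for j in range(i + 1, i + n):
                tags[j] = f"I-{label}"      # inside the value span
            break
    return list(zip(tokens, tags))

# Hypothetical sentence and infobox attribute (birthPlace = "London").
sentence = "Ada Lovelace was born in London in 1815 .".split()
tagged = bio_tag(sentence, ["London"], "birthPlace")
```

A sequence labeler trained on many such tagged sentences can then recover attribute values from unseen text, which are converted into RDF triples for the ontology class at hand.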

Development and Management of an Ontology and Registries for Sharing Metadata about Business Process Specifications under SOA (서비스 지향 아키텍쳐 하에서 비즈니스 프로세스 명세에 관한 메타 데이터를 공유하기 위한 온톨로지와 등록저장소의 개발 및 관리 방안)

  • Kim, Hyoung-Do;Kim, Jong-Woo
    • The Journal of the Korea Contents Association / v.7 no.11 / pp.9-22 / 2007
  • While standardization and its applications for registering and sharing information resources about B2B transactions, such as business documents, are relatively well developed, it is not easy to register and share business process resources, because there are many ways to define complex business processes using different specification (definition) languages. In practice, several competing business process specification languages are applicable under service-oriented architecture (SOA), such as ebXML BPSS, WS-BPEL, and BPMN. A systematic way has to be prepared to register and share the diverse, heterogeneous specifications represented in those languages. This paper demonstrates the usefulness of sharing B2B business processes by prototyping a business process registry called ebRR4BP. First, we designed a metadata ontology to support the registration of diverse B2B business processes. To implement the proposed metadata ontology using ebXML registries, a mapping scheme to the ebXML Registry Information Model is also suggested. The ontology and mapping scheme will be a foundation for supporting the common interchange of business process metadata among B2B registries.

A Study on Marine Accident Ontology Development and Data Management: Based on a Situation Report Analysis of Southwest Coast Marine Accidents in Korea (해양사고 온톨로지 구축 및 데이터 관리방안 연구: 서해남부해역 선박사고 상황보고서 분석을 중심으로)

  • Lee, Young Jai;Kang, Seong Kyung;Gu, Ja-Yeong
    • Journal of the Korean Society of Marine Environment & Safety / v.25 no.4 / pp.423-432 / 2019
  • Along with an increase in marine activities every year, the frequency of marine accidents is on the rise. Accordingly, various research activities and policies for marine safety are being implemented. Despite these efforts, the number of accidents is increasing every year, bringing their effectiveness into question. Preliminary studies relying on annual statistical reports provide precautionary measures for items that stand out significantly through the comparison of statistical items. Since the 2000s, large-scale marine accidents have repeatedly occurred, and case studies have examined the accident response. Likewise, annual statistics and accident cases are used as core data in policy formulation for domestic maritime safety. However, they are merely summaries of post-accident results. In this study, the limitations of current marine research and policy are evaluated through a literature review of case studies and analyses of marine accidents. In addition, the ontology of the marine accident information classification system is revised, through an attribute analysis of boating accident status reports and text mining, to improve the currently limited use of the information. Its aspects consist of the reporter, the report method, the rescue organization, corrective measures, vulnerability of response, payloads, cause of oil spill, damage pattern, and the result of the accident response. These can be used consistently in the future as classified standard terms to collect and utilize information more efficiently. Moreover, the research proposes a data collection and quality assurance method for the practical use of the ontology. A clear understanding of the problems presently faced in marine safety will allow sufficient-quality information to be leveraged for conducting various research efforts and realizing effective policies.