• Title/Summary/Keyword: Ontology Schema

A Study on the Thesaurus-based Ontology System for the Semantic Web (시소러스를 기반으로 한 온톨로지 시스템 구현에 관한 연구)

  • Jeong, Do-Heon;Kim, Tae-Su
    • Journal of the Korean Society for Information Management / v.20 no.3 / pp.155-175 / 2003
  • The purpose of this study was to construct an ontology system for the semantic web environment by utilizing an ontology schema derived from the faceted Art and Architecture Thesaurus (AAT). The ontology schema is based on the Web Ontology Language (OWL), which is widely regarded as the standard ontology language for the W3C-centered semantic web. The concepts were limited to terms within the AAT's Furniture Facet, and the system was tested using the Chair concept, a lower-level facet with diverse conceptual relationships and a broad vocabulary base. The ontology system can search for concepts while controlling the results by always returning a preferred term for synonymous terms. In addition, the system presents the user with, first, the relationships between terms centered on the query and, second, related terms along with their classification properties. As an application example, the ontology system is used to build an information system that takes instance values and reproduces them as an RDF file; in this process, multiple ontologies were utilized, along with the metadata elements of the stored instance values.
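As a rough illustration of the kind of schema the abstract describes (not the authors' actual AAT model), the sketch below uses Python's rdflib with SKOS-style labels to register a Chair concept under a hypothetical Furniture facet and to resolve a synonym to its preferred term; all URIs, labels, and modelling choices are assumptions.

```python
# Minimal sketch, assuming rdflib; URIs, facet layout, and labels are illustrative only.
from rdflib import Graph, Namespace, Literal, RDF, RDFS, OWL
from rdflib.namespace import SKOS

AAT = Namespace("http://example.org/aat/")   # hypothetical namespace, not the real AAT URIs
g = Graph()
g.bind("aat", AAT)
g.bind("skos", SKOS)

# Furniture facet and the Chair concept as OWL classes in a small hierarchy.
g.add((AAT.Furniture, RDF.type, OWL.Class))
g.add((AAT.Chair, RDF.type, OWL.Class))
g.add((AAT.Chair, RDFS.subClassOf, AAT.Furniture))

# Preferred term and a synonym, modelled here with SKOS labels.
g.add((AAT.Chair, SKOS.prefLabel, Literal("chair", lang="en")))
g.add((AAT.Chair, SKOS.altLabel, Literal("seat", lang="en")))

def preferred_term(graph, term):
    """Return the preferred label of the concept that carries `term` as a synonym."""
    for concept in graph.subjects(SKOS.altLabel, Literal(term, lang="en")):
        return str(graph.value(concept, SKOS.prefLabel))
    return term  # already a preferred term (or unknown)

print(preferred_term(g, "seat"))        # -> "chair"
print(g.serialize(format="turtle"))     # the OWL/SKOS fragment as Turtle
```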

A Measurement for the Degree of Semantic Relationship Between Two Instances Based on Context (컨텍스트에 기반한 두 인스턴스 사이의 의미 관계 정도 측정)

  • Han, Yong-Jin;Park, Se-Young;Park, Seong-Bae;Kim, Kweon-Yang
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.5 / pp.672-678 / 2008
  • Entities in reality have direct relationships with each other, and they acquire new, indirect relationships through those direct relationships. An ontology gives such relationships explicit meaning, so new relationships between entities can be discovered based on an ontology. Such newly discovered relationships are applied in identifying new communities or constructing social networks, and measuring the degree of a relationship is an important problem in these domains. This paper proposes a measurement of the degree of relationship between entities based on an ontology. Most existing research relies on connected paths between entities; however, two entities can have meaningful relationships through the schema of an ontology even when there is no connected path between them. The proposed method measures the degree of relationship between two entities based not only on connected paths but also on relationships through the schema. The experimental results show that relationships through the schema are meaningful for measuring the degree of relationship between entities.
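The paper's actual measure is not reproduced here; as a simplified illustration of the underlying idea, the sketch below (assuming networkx and a toy ontology) places instance edges, rdf:type edges, and schema edges in one graph, so two instances with no instance-level path between them still obtain a finite relatedness score through the schema.

```python
# Toy illustration only: relatedness as inverse path length over instance + schema edges.
import networkx as nx

G = nx.Graph()
# Instance-level facts (no direct path between "alice" and "bob" at the instance level).
G.add_edge("alice", "Person", label="rdf:type")
G.add_edge("bob", "Company", label="rdf:type")
# Schema-level knowledge connecting their classes.
G.add_edge("Person", "Company", label="worksFor (schema)")

def relatedness(graph, a, b):
    """Degree of relationship as 1 / (1 + shortest path length); 0 if unreachable."""
    try:
        return 1.0 / (1.0 + nx.shortest_path_length(graph, a, b))
    except nx.NetworkXNoPath:
        return 0.0

print(relatedness(G, "alice", "bob"))  # > 0 thanks to the schema edge Person-Company
```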

Storing Scheme based on Graph Data Model for Managing RDF/S Data (RDF/S 데이터의 관리를 위한 그래프 데이터 모델 기반 저장 기법)

  • Kim, Youn-Hee;Choi, Jae-Yeon;Lim, Hae-Chull
    • Journal of Digital Contents Society / v.9 no.2 / pp.285-293 / 2008
  • In the Semantic Web, metadata and ontologies for representing the semantics and conceptual relationships of information resources are essential, and RDF and RDF Schema are the W3C standard models for describing them. Consequently, many studies on storing and retrieving RDF and RDF Schema documents are needed. In this paper, we analyze the query patterns that involve both RDF and RDF Schema and classify such queries into three patterns. Since RDF and RDF Schema can be represented as graph models, we propose strategies for storing and retrieving them based on these graph models. With the proposed scheme, entities reachable from a given class or property in RDF and RDF Schema can be retrieved without the performance loss caused by multiple table joins.
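As a minimal sketch of the kind of reachability query the abstract mentions (retrieving entities reachable from a class without multi-table joins), the example below uses rdflib's in-memory graph and its transitive traversal helpers; the storage scheme itself is not reproduced, and all URIs are illustrative.

```python
# Minimal sketch, assuming rdflib; illustrates reachability queries over an RDF/RDFS graph.
from rdflib import Graph, Namespace, RDF, RDFS

EX = Namespace("http://example.org/")
g = Graph()

# A small RDFS schema plus one instance.
g.add((EX.Chair, RDFS.subClassOf, EX.Furniture))
g.add((EX.Furniture, RDFS.subClassOf, EX.Artifact))
g.add((EX.myChair, RDF.type, EX.Chair))

# ex:Chair and every superclass reachable from it via rdfs:subClassOf edges.
for c in g.transitive_objects(EX.Chair, RDFS.subClassOf):
    print("reachable class:", c)

# All classes (direct or inherited) that ex:myChair belongs to.
direct = set(g.objects(EX.myChair, RDF.type))
inherited = {sup for c in direct for sup in g.transitive_objects(c, RDFS.subClassOf)}
print("types of ex:myChair:", direct | inherited)
```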

A Study of Dynamic Web Ontology for Comparison-shopping Agent based on Semantic Web (시멘틱 웹 기반의 비교구매 에이전트를 위한 동적 웹 온톨로지에 대한 연구)

  • Kim, Su-Kyoung;Ahn, Ki-Hong
    • Journal of Intelligence and Information Systems / v.11 no.2 / pp.31-45 / 2005
  • In this paper, product information for a digital camcorder (DCC) is acquired from the HTML pages of different electronic commerce stores using wrapper technology, converted into RDF triples and RDF documents through RDF document converters, and described with a metadata schema designed for the digital camcorder domain. Based on the designed metadata schema, the data are converted into an OWL web ontology and stored in a digital camcorder domain ontology repository, the DCC knowledge base ontology (DCCKBO), implemented on a relational database. Through comparison, mapping, and inference between the converted RDF and the DCCKBO, the system provides buyers with the information of the store offering the best purchase conditions, and a dynamic web ontology is proposed that infers the best purchasing information and defines the domain ontology stored in the DCCKBO.
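As a rough sketch of the conversion step described above (not the paper's converter), the code below, assuming rdflib and made-up store/product identifiers, turns wrapper-extracted product attributes into RDF triples and serializes them as an RDF/XML document.

```python
# Illustrative only: wrapper output (a plain dict here) converted to RDF triples.
from rdflib import Graph, Namespace, Literal, RDF

DCC = Namespace("http://example.org/dcc/")   # hypothetical digital-camcorder vocabulary
g = Graph()
g.bind("dcc", DCC)

# Pretend result of scraping one store's HTML product page with a wrapper.
scraped = {"model": "XR-100", "price": "499000", "store": "StoreA", "zoom": "20x"}

product = DCC["XR-100"]
g.add((product, RDF.type, DCC.DigitalCamcorder))
for attr, value in scraped.items():
    g.add((product, DCC[attr], Literal(value)))   # one triple per extracted attribute

print(g.serialize(format="xml"))   # RDF/XML document ready to be matched against the DCCKBO
```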

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.43-61 / 2019
  • The development of artificial intelligence technologies has accelerated with the Fourth Industrial Revolution, and AI research has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving, and the field has achieved more technological advances than ever thanks to recent interest in the technology and work on various algorithms. The knowledge-based system is a sub-domain of artificial intelligence; it aims to enable AI agents to make decisions by using machine-readable and processable knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it has been used together with statistical artificial intelligence such as machine learning. More recently, the purpose of a knowledge base has been to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. Such knowledge bases are used for intelligent processing in various AI applications, for example the question-answering systems of smart speakers. However, building a useful knowledge base is time-consuming and still requires considerable expert effort. In recent years, much knowledge-based AI research and technology has used DBpedia, one of the largest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains information extracted from Wikipedia such as titles, categories, and links, but its most useful knowledge comes from Wikipedia infoboxes, which present user-created summaries of an article's unifying aspects. This knowledge is created through the mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. By generating knowledge from semi-structured infobox data created by users, DBpedia can achieve high reliability in terms of the accuracy of its knowledge. However, since only about 50% of all wiki pages in Korean Wikipedia contain an infobox, DBpedia is limited in terms of knowledge scalability. This paper proposes a method for extracting knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model that follows the DBpedia ontology schema and is learned from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classifying a document into ontology classes, classifying the sentences suitable for extracting triples, and selecting values and transforming them into RDF triple structures. The structure of Wikipedia infoboxes is defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these templates. Based on these mapping relations, we classify the input document into infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that class. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triples.
To train the models, we generated a training data set from a Wikipedia dump by adding BIO tags to sentences, and we trained about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we conducted comparative experiments with CRF and Bi-LSTM-CRF for the knowledge extraction process. Through the proposed process, structured knowledge can be utilized by extracting it from text documents according to the ontology schema. In addition, this methodology can significantly reduce the effort required of experts to construct instances according to the ontology schema.
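To make the final step concrete, the toy function below (not the trained CRF/Bi-LSTM-CRF model) shows how a BIO-tagged sentence could be turned into an RDF-style triple for an assumed DBpedia-like property; the tag scheme, prefixes, and property name are assumptions.

```python
# Toy sketch of turning BIO-tagged tokens into a (subject, predicate, object) triple.
def bio_to_triple(subject, predicate, tokens, tags):
    """Collect the tokens tagged B-VAL/I-VAL as the object value of one triple."""
    value_tokens = [tok for tok, tag in zip(tokens, tags) if tag in ("B-VAL", "I-VAL")]
    if not value_tokens:
        return None
    return (subject, predicate, " ".join(value_tokens))

tokens = ["Seoul", "is", "the", "capital", "of", "South", "Korea", "."]
tags   = ["O",     "O",  "O",   "O",       "O",  "B-VAL", "I-VAL", "O"]

# Assumed mapping: the document's class and the matched attribute decide subject and predicate.
print(bio_to_triple("dbr:Seoul", "dbo:country", tokens, tags))
# -> ('dbr:Seoul', 'dbo:country', 'South Korea')
```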

Designing Schemes to Associate Basic Semantics Register with RDF/OWL (기본의미등록기의 RDF/OWL 연계방안에 관한 연구)

  • Oh, Sam-Gyun
    • Journal of the Korean Society for Information Management / v.20 no.3 / pp.241-259 / 2003
  • The Basic Semantic Register (BSR) is an official ISO register designed for interoperability among eBusiness and EDI systems. The entities registered in the current BSR are not defined in a machine-understandable way, which makes it impossible to automatically extract structural and relationship information from the register. The purpose of this study is to offer a framework for designing an ontology that can provide semantic interoperability among BSR-based systems by defining data structures and relationships with RDF and OWL: similar meanings are expressed with the 'equivalentClass' construct in OWL, hierarchical relationships among classes with the 'subClassOf' construct in RDF Schema, definitions of BSR entities with the 'label' construct in RDF Schema, usage guidelines with the 'comment' construct in RDF Schema, assignment of classes to BSUs with the 'domain' construct in RDF Schema, and the data types of BSUs with the 'range' construct in RDF Schema. Hierarchical relationships among properties in the BSR can be expressed using 'subPropertyOf' in RDF Schema. Progress in semantic interoperability can be expected among BSR-based systems through application of the semantic web technologies suggested in this study.
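As a minimal sketch of the mapping the abstract enumerates (assuming rdflib, with hypothetical BSU and class names rather than actual BSR content), the code below exercises each of the listed constructs: owl:equivalentClass, rdfs:subClassOf, rdfs:label, rdfs:comment, rdfs:domain, rdfs:range, and rdfs:subPropertyOf.

```python
# Minimal sketch, assuming rdflib; BSU names and class names are hypothetical.
from rdflib import Graph, Namespace, Literal, RDF, RDFS, OWL, XSD

BSR = Namespace("http://example.org/bsr/")
g = Graph()
g.bind("bsr", BSR)

# Classes: similar meaning and class hierarchy.
g.add((BSR.BuyerParty, RDF.type, OWL.Class))
g.add((BSR.BuyerParty, OWL.equivalentClass, BSR.PurchaserParty))   # similar meaning
g.add((BSR.BuyerParty, RDFS.subClassOf, BSR.Party))                # class hierarchy

# A Basic Semantic Unit (BSU) modelled as a property.
g.add((BSR.buyerName, RDF.type, RDF.Property))
g.add((BSR.buyerName, RDFS.label, Literal("Buyer. Name", lang="en")))                     # definition/name
g.add((BSR.buyerName, RDFS.comment, Literal("Name used to identify the buyer party.")))   # usage guideline
g.add((BSR.buyerName, RDFS.domain, BSR.BuyerParty))                # class assigned to the BSU
g.add((BSR.buyerName, RDFS.range, XSD.string))                     # data type of the BSU
g.add((BSR.buyerName, RDFS.subPropertyOf, BSR.partyName))          # property hierarchy

print(g.serialize(format="turtle"))
```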

An Enhanced Concept Search Method for Ontology Schematic Reasoning (온톨로지 스키마 추론을 위한 향상된 개념 검색방법)

  • Kwon, Soon-Hyun;Park, Young-Tack
    • Journal of KIISE: Software and Applications / v.36 no.11 / pp.928-935 / 2009
  • Ontology schema reasoning is used to maintain the consistency of concepts and to build concept hierarchies automatically. For this purpose, concepts must inevitably be searched: schema reasoning performs subsumption tests over all of the concepts in the test set, and the results of those tests are determined by constructing completion graphs, which strongly affects reasoning performance. Creating completion graphs is generally known to be an expensive procedure, so reducing it is essential for improving reasoning performance. In this paper, we propose a method that enhances classification performance by identifying unnecessary subsumption tests, supported by an optimized search over the subsumption relationships among concepts. This is achieved by propagating subsumption test results to other concepts.
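The paper's optimization is not reproduced; the sketch below only illustrates the general pruning idea with a toy top-down search: once a concept fails a subsumption test against a class, none of that class's subclasses need to be tested, because the negative result propagates downward. The hierarchy and the stub subsumption test are made up.

```python
# Toy illustration of pruning subsumption tests during classification (not the paper's algorithm).
# CHILDREN maps a class to its direct subclasses; subsumes() stands in for a real DL reasoner.
CHILDREN = {
    "Thing":     ["Furniture", "Person"],
    "Furniture": ["Chair", "Table"],
    "Person":    [],
    "Chair":     [],
    "Table":     [],
}
# Told ancestors of the new concept "OfficeChair" (what a tableau reasoner would establish).
NEW_CONCEPT_ANCESTORS = {"Thing", "Furniture", "Chair"}

def subsumes(cls, _new_concept):
    """Stub subsumption test: does `cls` subsume the new concept?"""
    return cls in NEW_CONCEPT_ANCESTORS

def classify(new_concept):
    """Top-down search: a failed test at a class prunes all of its subclasses."""
    subsumers, tests = [], 0
    queue = ["Thing"]
    while queue:
        cls = queue.pop(0)
        tests += 1
        if subsumes(cls, new_concept):
            subsumers.append(cls)
            queue.extend(CHILDREN[cls])   # only descend below positive results
        # negative result: the whole subtree under `cls` is skipped
    return subsumers, tests

print(classify("OfficeChair"))   # -> (['Thing', 'Furniture', 'Chair'], 5)
```

In a larger hierarchy, each negative result skips an entire subtree of tests, which is where the savings come from.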

Similarity measure for P2P processing of semantic data (시맨틱웹 데이터의 P2P 처리를 위한 유사도 측정)

  • Kim, Byung Gon;Kim, Youn Hee
    • Journal of Korea Society of Digital Industry and Information Management / v.6 no.4 / pp.11-20 / 2010
  • Ontologies play an important role in constructing and querying semantic data on the semantic web. Because of the dynamic characteristics of ontologies, P2P environments are considered for ontology processing on the web, and efficient ontology processing in a P2P environment requires clustering of peers. When a new peer is added to the network, allocating it to a cluster is important for system efficiency. To cluster peers with similar characteristics, a method is needed for measuring the similarity between the ontology of the added peer and the ontologies of the other clusters. In this paper, we propose similarity measurement techniques for ontologies to support peer clustering. The proposed measure considers structural characteristics of an ontology such as its schema, classes, and properties. Experimental results show that ontologies with similar topics, classes, and properties can be allocated to the same cluster.
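The paper's measure is not reproduced; as a simplified stand-in, the sketch below scores ontology similarity as a weighted Jaccard overlap of class and property name sets and assigns a new peer to the most similar cluster. The weights and the name-based matching are assumptions.

```python
# Simplified sketch: weighted Jaccard similarity over class/property names for peer clustering.
def jaccard(a, b):
    return len(a & b) / len(a | b) if (a | b) else 0.0

def ontology_similarity(o1, o2, w_class=0.6, w_prop=0.4):
    """o1/o2 are dicts with 'classes' and 'properties' name sets; weights are assumed."""
    return (w_class * jaccard(o1["classes"], o2["classes"])
            + w_prop * jaccard(o1["properties"], o2["properties"]))

new_peer = {"classes": {"Chair", "Table", "Furniture"}, "properties": {"hasLeg", "madeOf"}}
clusters = {
    "furniture": {"classes": {"Furniture", "Chair", "Sofa"}, "properties": {"madeOf"}},
    "vehicles":  {"classes": {"Car", "Truck"},               "properties": {"hasWheel"}},
}

best = max(clusters, key=lambda c: ontology_similarity(new_peer, clusters[c]))
print(best)   # -> "furniture"
```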

A Methodology for Searching Frequent Pattern Using Graph-Mining Technique (그래프마이닝을 활용한 빈발 패턴 탐색에 관한 연구)

  • Hong, June Seok
    • Journal of Information Technology Applications and Management / v.26 no.1 / pp.65-75 / 2019
  • As the use of the XML-based semantic web increases in the field of data management, many studies have attempted to extract useful information from data stored in ontologies using association rule mining. Unlike a conventional database with a predefined structure, ontology data has a flexible and scalable structure, which is advantageous because data can be expressed freely; on the other hand, this makes it difficult to find frequent patterns with a uniform analysis method. The goal of this study is to provide a basis for extracting useful knowledge from ontologies by searching for frequently occurring subgraph patterns, applying transaction-based graph mining techniques to the ontology schema graph and the instance graphs that constitute an ontology. To overcome the structural limitations of existing ontology mining, the proposed frequent pattern search methodology adapts graph mining techniques for frequent patterns in graph-structured data to ontologies by applying an iterative node chunking method. The suggested methodology will play an important role in knowledge extraction.
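The iterative node chunking method itself is not reproduced here; as a toy illustration of the transaction-based setting, the sketch below counts single-edge patterns (typed subject, predicate, typed object) across instance-graph transactions and keeps those meeting a minimum support. Larger patterns would be grown from these frequent edges; all data are made up.

```python
# Toy transaction-based frequent pattern count over edge-labelled instance graphs.
from collections import Counter

# Each transaction: the set of (subject_class, predicate, object_class) edges of one instance graph.
transactions = [
    {("Person", "worksFor", "Company"), ("Person", "livesIn", "City")},
    {("Person", "worksFor", "Company"), ("Company", "locatedIn", "City")},
    {("Person", "worksFor", "Company")},
]

MIN_SUPPORT = 2
support = Counter(edge for t in transactions for edge in t)
frequent_edges = {edge: n for edge, n in support.items() if n >= MIN_SUPPORT}
print(frequent_edges)   # {('Person', 'worksFor', 'Company'): 3}
```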

Automated Modelling of Ontology Schema for Media Classification (미디어 분류를 위한 온톨로지 스키마 자동 생성)

  • Lee, Nam-Gee;Park, Hyun-Kyu;Park, Young-Tack
    • Journal of KIISE / v.44 no.3 / pp.287-294 / 2017
  • With the growth of personal media through channels such as UCC and SNS, many media studies have been conducted for analysis and recognition, improving the level of object recognition. The focus of these studies is classifying media based on recognition of the objects they contain, rather than relying on title, tag, and script information. The media classification task, however, is time- and labor-intensive because human experts need to model the underlying media ontology. This paper therefore proposes an automated approach for modeling the media classification ontology schema: OWL-DL axioms are generated based on the frequency of the objects recognized in the media, and the ontology modeling is thereby automated. The authors conducted media classification experiments across 15 YouTube video categories and measured the classification accuracy of the automated ontology modeling approach. The promising results show that 1,500 actions from 15 media events were successfully classified with 86% accuracy.
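As a rough sketch of the idea (not the authors' modelling rules), the code below, assuming rdflib and made-up category, object, and property names, defines one OWL class per media category and, for each frequently recognized object, adds an owl:someValuesFrom restriction on an assumed hasObject property.

```python
# Illustrative only: generate simple OWL-DL axioms from object-recognition frequencies.
from rdflib import Graph, Namespace, BNode, RDF, RDFS, OWL

EX = Namespace("http://example.org/media/")
g = Graph()
g.bind("ex", EX)

# Assumed input: per media category, how often each recognized object appeared.
object_freq = {"DogVideo": {"Dog": 120, "Ball": 45, "Car": 3},
               "CookingVideo": {"Pan": 80, "Knife": 60, "Dog": 2}}
THRESHOLD = 40   # keep only frequently recognized objects

for category, counts in object_freq.items():
    g.add((EX[category], RDF.type, OWL.Class))
    for obj, n in counts.items():
        if n < THRESHOLD:
            continue
        restriction = BNode()                      # anonymous owl:Restriction
        g.add((restriction, RDF.type, OWL.Restriction))
        g.add((restriction, OWL.onProperty, EX.hasObject))
        g.add((restriction, OWL.someValuesFrom, EX[obj]))
        g.add((EX[category], RDFS.subClassOf, restriction))

print(g.serialize(format="turtle"))
```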