• Title/Summary/Keyword: RDFS

An Optimization Technique for RDFS Inference Using the Applied Order of RDF Schema Entailment Rules (RDF 스키마 함의 규칙 적용 순서를 이용한 RDFS 추론 엔진의 최적화)

  • Kim, Ki-Sung;Yoo, Sang-Won;Lee, Tae-Whi;Kim, Hyung-Joo
    • Journal of KIISE:Databases / v.33 no.2 / pp.151-162 / 2006
  • RDF Semantics, one of the W3C Recommendations, provides the RDFS entailment rules used for RDFS inference. Sesame, a well-known RDF repository, supports RDBMS-based RDFS inference using the forward-chaining strategy. Since forward-chaining inference is performed at data loading time, loading in Sesame is slowed down by inferencing. In this paper, we propose an application order for the RDFS entailment rules that improves inference performance. The proposed order allows the inference process to terminate without repetition in most cases while guaranteeing the completeness of the inference result. The order also helps reduce redundant results during inference by predicting which results have already been produced by previously applied rules. We show that our approach improves inference performance compared with the original Sesame on several real-life RDF datasets.
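
The ordering idea in the abstract above can be illustrated with a small forward-chaining sketch: the RDFS entailment rules are applied in one fixed sequence (subPropertyOf rules, then domain/range rules, then subClassOf rules) so that, for typical schemas, a single pass suffices and no fixpoint loop over all rules is needed. This is a rough illustration under my own simplifications, not the paper's Sesame-based engine; the rule names in the comments refer to the RDF Semantics rules.

```python
RDF_TYPE, SUBCLASS = "rdf:type", "rdfs:subClassOf"
SUBPROP, DOMAIN, RANGE = "rdfs:subPropertyOf", "rdfs:domain", "rdfs:range"

def transitive_closure(triples, pred):
    """rdfs5 / rdfs11: close subPropertyOf / subClassOf transitively."""
    out = set(triples)
    changed = True
    while changed:
        changed = False
        pairs = {(s, o) for s, p, o in out if p == pred}
        for a, b in pairs:
            for c, d in pairs:
                if b == c and (a, pred, d) not in out:
                    out.add((a, pred, d))
                    changed = True
    return out

def infer(triples):
    """Apply the RDFS rules once, in a fixed order: rdfs5, 7, 2, 3, 11, 9."""
    t = transitive_closure(set(triples), SUBPROP)                          # rdfs5
    subprop = {(p, q) for p, x, q in t if x == SUBPROP}
    t |= {(s, q, o) for s, p, o in t for p2, q in subprop if p == p2}      # rdfs7
    dom = {(p, c) for p, x, c in t if x == DOMAIN}
    rng = {(p, c) for p, x, c in t if x == RANGE}
    t |= {(s, RDF_TYPE, c) for s, p, o in t for p2, c in dom if p == p2}   # rdfs2
    t |= {(o, RDF_TYPE, c) for s, p, o in t for p2, c in rng if p == p2}   # rdfs3
    t = transitive_closure(t, SUBCLASS)                                    # rdfs11
    sub = {(c, d) for c, x, d in t if x == SUBCLASS}
    t |= {(s, RDF_TYPE, d) for s, p, c in t if p == RDF_TYPE
          for c2, d in sub if c == c2}                                     # rdfs9
    return t

kb = {("hasAdvisor", SUBPROP, "knows"), ("knows", DOMAIN, "Person"),
      ("Person", SUBCLASS, "Agent"), ("alice", "hasAdvisor", "bob")}
print(infer(kb) - kb)
# adds (alice, knows, bob), (alice, rdf:type, Person), (alice, rdf:type, Agent)
```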

Distributed Table Join for Scalable RDFS Reasoning on Cloud Computing Environment (클라우드 컴퓨팅 환경에서의 대용량 RDFS 추론을 위한 분산 테이블 조인 기법)

  • Lee, Wan-Gon;Kim, Je-Min;Park, Young-Tack
    • Journal of KIISE / v.41 no.9 / pp.674-685 / 2014
  • A knowledge service system needs to infer new knowledge from asserted knowledge to provide effective services, and most knowledge service systems express their knowledge as ontologies. Because the volume of knowledge information in the real world is becoming massive, effective techniques for massive ontology data are drawing attention. This paper presents methods for inferring massive ontology data at the RDFS level in a cloud computing environment and evaluates their performance. The RDFS inference proposed in this paper covers both a method applying MapReduce based on an RDFS meta table and a method using only cloud computing memory, without MapReduce, in a distributed file computing environment. The paper describes the basic inference system structure of each technique, the meta table set-up according to the RDFS inference rules, and the algorithm of each inference strategy. To evaluate the suggested methods, we experiment with the LUBM data sets, a standard benchmark for evaluating ontology inference and search speed. For LUBM6000, the RDFS inference technique based on the meta table required 13.75 minutes to complete inference (1,042 triples inferred per second), whereas the method using cloud computing memory needed 7.24 minutes (1,979 triples per second), roughly twice as fast.
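
As a rough illustration of the meta-table idea described above (not the paper's MapReduce code), the sketch below keeps the small schema part, the subClassOf closure, in memory like a broadcast table and joins each instance triple against it in a map step, with deduplication as the reduce step. All names and data are made up.

```python
from collections import defaultdict

def build_meta_table(schema_triples):
    """Meta table: class -> set of (transitively closed) superclasses."""
    supers = defaultdict(set)
    for s, p, o in schema_triples:
        if p == "rdfs:subClassOf":
            supers[s].add(o)
    changed = True
    while changed:
        changed = False
        for c in list(supers):
            new = {g for sup in supers[c] for g in supers.get(sup, ())}
            if not new <= supers[c]:
                supers[c] |= new
                changed = True
    return supers

def map_rdfs9(triple, meta):
    """Map step: join one instance triple against the broadcast meta table."""
    s, p, o = triple
    if p == "rdf:type":
        for sup in meta.get(o, ()):
            yield (s, "rdf:type", sup)

def reduce_dedup(mapped):
    """Reduce step: deduplicate the inferred triples."""
    return set(mapped)

meta = build_meta_table([("Student", "rdfs:subClassOf", "Person"),
                         ("Person", "rdfs:subClassOf", "Agent")])
print(reduce_dedup(t for triple in [("alice", "rdf:type", "Student")]
                   for t in map_rdfs9(triple, meta)))
# {('alice', 'rdf:type', 'Person'), ('alice', 'rdf:type', 'Agent')}
```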

Scalable RDFS Reasoning using Logic Programming Approach in a Single Machine (단일머신 환경에서의 논리적 프로그래밍 방식 기반 대용량 RDFS 추론 기법)

  • Jagvaral, Batselem;Kim, Jemin;Lee, Wan-Gon;Park, Young-Tack
    • Journal of KIISE / v.41 no.10 / pp.762-773 / 2014
  • As the web of data produces increasingly large RDFS datasets, building scalable reasoning engines over large triple sets becomes essential. Many studies have used expensive distributed frameworks, such as Hadoop, to reason over large RDFS triple sets. In many cases, however, we only need to handle millions of triples, and it is then unnecessary to deploy expensive distributed systems, because a logic-programming-based reasoner on a single machine can deliver reasoning performance similar to that of a distributed reasoner using Hadoop. In this paper, we propose a scalable RDFS reasoner that uses logic programming on a single machine and compare our empirical results with those of distributed systems. We show that our single-machine, logic-programming-based reasoner performs similarly to an expensive distributed reasoner for up to 200 million RDFS triples. In addition, we designed a metadata structure that decomposes the ontology triples into separate sectors: instead of loading all triples into a single model, we select an appropriate subset of the triples for each ontology reasoning rule. Unification makes it easy to handle conjunctive queries for RDFS schema reasoning, so we designed and implemented the RDFS axioms using logic programming unification and an efficient conjunctive query handling mechanism. The throughput of our approach reached 166K triples/sec over LUBM1500 with 200 million triples, comparable to WebPIE, a distributed reasoner using Hadoop and MapReduce, which achieves 185K triples/sec. We conclude that a distributed system is unnecessary for up to 200 million triples and that the performance of a logic-programming-based reasoner on a single machine is comparable to that of an expensive distributed reasoner built on the Hadoop framework.
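
The logic-programming approach described above can be sketched in plain Python: the rdfs9 and rdfs11 axioms are written as Horn clauses and answered by backward chaining with unification. The paper uses a real logic programming system; this is only an illustrative re-creation under my own simplifications.

```python
import itertools

def walk(t, env):
    """Follow variable bindings ('?x'-prefixed atoms) to their current value."""
    while isinstance(t, str) and t.startswith("?") and t in env:
        t = env[t]
    return t

def unify(a, b, env):
    """Unify two triples of atoms/variables under env; return new env or None."""
    env = dict(env)
    for x, y in zip(a, b):
        x, y = walk(x, env), walk(y, env)
        if x == y:
            continue
        if x.startswith("?"):
            env[x] = y
        elif y.startswith("?"):
            env[y] = x
        else:
            return None
    return env

# rdfs9 and rdfs11 as head :- body clauses
RULES = [
    (("?x", "rdf:type", "?d"),
     [("?x", "rdf:type", "?c"), ("?c", "rdfs:subClassOf", "?d")]),
    (("?a", "rdfs:subClassOf", "?c"),
     [("?a", "rdfs:subClassOf", "?b"), ("?b", "rdfs:subClassOf", "?c")]),
]

_fresh = itertools.count()

def rename(term, mapping):
    """Give rule variables fresh names so separate rule uses do not clash."""
    return tuple(mapping.setdefault(t, t + str(next(_fresh)))
                 if t.startswith("?") else t for t in term)

def solve(goals, facts, env, depth=6):
    """Depth-bounded backward chaining over facts and RULES."""
    if not goals:
        yield env
        return
    goal, rest = goals[0], goals[1:]
    for fact in facts:                       # match the goal against facts
        e = unify(goal, fact, env)
        if e is not None:
            yield from solve(rest, facts, e, depth)
    if depth == 0:
        return
    for head, body in RULES:                 # or prove it via a rule
        m = {}
        e = unify(goal, rename(head, m), env)
        if e is not None:
            yield from solve(tuple(rename(b, m) for b in body) + rest,
                             facts, e, depth - 1)

facts = {("Student", "rdfs:subClassOf", "Person"),
         ("Person", "rdfs:subClassOf", "Agent"),
         ("alice", "rdf:type", "Student")}
print({walk("?t", env) for env in solve((("alice", "rdf:type", "?t"),), facts, {})})
# {'Student', 'Person', 'Agent'}
```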

Indexing Scheme for RDF/RDFS using Prime Number Label (소수 레이블을 이용한 RDF/RDFS 인덱스 구조)

  • Kim, Sun-Young;Kwon, Dong-Seop;Lee, Suk-Ho
    • Proceedings of the Korean Information Science Society Conference / 2005.07b / pp.82-84 / 2005
  • With the advent of the Semantic Web, the amount of web data represented in RDF and RDF Schema (RDF/RDFS) is growing, and the need for index structures that can store and retrieve such data efficiently is increasing. In this work, we extend the existing prime number labeling scheme for tree models and propose a prime number labeling scheme for graph models that can represent an RDF/RDFS index structure. By applying prime number labels to graphs, the proposed scheme processes structural queries efficiently and does not require index reconstruction when the data are updated. It also easily handles queries over cyclic directed graphs, which were difficult to process efficiently with previous RDF/RDFS index structures.

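A toy sketch of the prime-number labelling idea (my own reconstruction, not the authors' index structure): every node in the class/property graph gets a unique prime, and its label is that prime multiplied by the labels of its direct parents. The ancestor test then becomes a divisibility check, and inserting a new node never requires relabelling existing nodes. Handling the cyclic graphs mentioned in the abstract would need extra machinery not shown here.

```python
from itertools import count

def primes():
    """Naive prime generator, good enough for a small example."""
    found = []
    for n in count(2):
        if all(n % p for p in found):
            found.append(n)
            yield n

class PrimeLabelIndex:
    def __init__(self):
        self._primes = primes()
        self.prime = {}    # node -> its own prime
        self.label = {}    # node -> own prime * product of direct parents' labels

    def add(self, node, parents=()):
        p = next(self._primes)
        label = p
        for parent in parents:
            label *= self.label[parent]
        self.prime[node], self.label[node] = p, label

    def is_ancestor(self, anc, desc):
        """anc lies above desc iff anc's prime divides desc's label."""
        return anc != desc and self.label[desc] % self.prime[anc] == 0

idx = PrimeLabelIndex()
idx.add("rdfs:Resource")
idx.add("Person", parents=["rdfs:Resource"])
idx.add("Student", parents=["Person"])
print(idx.is_ancestor("rdfs:Resource", "Student"))   # True
print(idx.is_ancestor("Student", "Person"))          # False
```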

Efficient Reasoning Using View in DBMS-based Triple Store (DBMS기반 트리플 저장소에서 뷰를 이용한 효율적인 추론)

  • Lee, Seungwoo;Kim, Jae-Han;You, Beom-Jong
    • Proceedings of the Korea Contents Association Conference / 2009.05a / pp.74-78 / 2009
  • Efficient reasoning has become important for improving the performance of ontology systems as the size of ontologies grows. In this paper, we introduce a method that efficiently performs reasoning over the RDFS entailment rules (i.e., the rdfs7 and rdfs9 rules) and the OWL inverse rule using views in a DBMS-based triple store. Reasoning is performed by replacing the reasoning rules with corresponding view definitions and storing RDF triples in structured triple tables. When processing queries, the views are consulted instead of the original tables. In this way, we reduce the time needed for reasoning and also obtain a space-efficient triple store.

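The view-based idea can be sketched with sqlite3: instead of materialising rdfs9 inferences, a view joins the instance-type table with a (pre-closed) subClassOf table at query time. The table and column names here are illustrative, not the authors' schema.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE type_of(instance TEXT, class TEXT);    -- rdf:type triples
CREATE TABLE subclass_of(sub TEXT, super TEXT);     -- rdfs:subClassOf, transitively closed

-- rdfs9 as a view: asserted types plus types inherited from superclasses
CREATE VIEW inferred_type AS
    SELECT instance, class FROM type_of
    UNION
    SELECT t.instance, s.super
    FROM type_of t JOIN subclass_of s ON t.class = s.sub;
""")

con.executemany("INSERT INTO subclass_of VALUES (?, ?)",
                [("Student", "Person"), ("Student", "Agent"), ("Person", "Agent")])
con.executemany("INSERT INTO type_of VALUES (?, ?)", [("alice", "Student")])

print(con.execute("SELECT class FROM inferred_type WHERE instance='alice'").fetchall())
# [('Student',), ('Person',), ('Agent',)]  -- order may vary
```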

Scalable RDFS Reasoning Using the Graph Structure of In-Memory based Parallel Computing (인메모리 기반 병렬 컴퓨팅 그래프 구조를 이용한 대용량 RDFS 추론)

  • Jeon, MyungJoong;So, ChiSeoung;Jagvaral, Batselem;Kim, KangPil;Kim, Jin;Hong, JinYoung;Park, YoungTack
    • Journal of KIISE / v.42 no.8 / pp.998-1009 / 2015
  • In recent years, there has been growing interest in RDFS inference for building rich knowledge bases. However, it is difficult to improve inference performance over large data using a single machine, so researchers are investigating RDFS inference engines for distributed computing environments. The existing inference engines, however, cannot process data in real time, are difficult to implement, and are vulnerable to repetitive tasks. To overcome these problems, we propose an in-memory distributed inference engine that uses a parallel graph structure. In general, an ontology based on a triple structure has a graph structure, so it is intuitive to design a graph-structure-based inference engine. Moreover, the RDFS inference rules can be implemented with the operators of the graph structure, allowing the inference engine to be designed around the graph structure rather than the structure of a data table. We evaluate the proposed inference engine using the LUBM1000 and LUBM3000 data sets to test inference speed. The results of our experiments indicate that the proposed in-memory distributed inference engine performs about 10 times faster than a storage-based inference engine.
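
A simplified, single-process sketch of the graph formulation described above: triples are held as labelled edges, and rdfs11/rdfs9 are expressed as traversals over that graph rather than as table joins. The paper targets an in-memory distributed graph engine; this only illustrates the graph-shaped view of the rules, with invented data.

```python
from collections import defaultdict

def build_graph(triples):
    """subject -> predicate -> set of objects (triples as labelled edges)."""
    out = defaultdict(lambda: defaultdict(set))
    for s, p, o in triples:
        out[s][p].add(o)
    return out

def superclasses(graph, cls):
    """rdfs11 as graph traversal: follow subClassOf edges transitively."""
    seen, stack = set(), [cls]
    while stack:
        c = stack.pop()
        for sup in graph[c]["rdfs:subClassOf"]:
            if sup not in seen:
                seen.add(sup)
                stack.append(sup)
    return seen

def inferred_types(graph, node):
    """rdfs9 as graph traversal: asserted types plus all their superclasses."""
    types = set(graph[node]["rdf:type"])
    for t in list(types):
        types |= superclasses(graph, t)
    return types

triples = [("alice", "rdf:type", "Student"),
           ("Student", "rdfs:subClassOf", "Person"),
           ("Person", "rdfs:subClassOf", "Agent")]
print(inferred_types(build_graph(triples), "alice"))   # {'Student', 'Person', 'Agent'}
```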

Efficient Change Detection between RDF Models Using Backward Chaining Strategy (후방향 전진 추론을 이용한 RDF 모델의 효율적인 변경 탐지)

  • Im, Dong-Hyuk;Kim, Hyoung-Joo
    • Journal of KIISE:Computing Practices and Letters / v.15 no.2 / pp.125-133 / 2009
  • RDF is widely used as the ontology language for representing metadata on the semantic web. Since an ontology models the real world, it changes over time, so detecting and analyzing changes is very important in a knowledge base system. Earlier studies on detecting changes between RDF models focused on structural differences. Techniques that reduce the size of the delta by considering the RDFS entailment rules have been introduced, but inferencing over full RDF models increases data size and upload time. In this paper, we propose a new change detection method that uses RDF reasoning to compute only a small part of the implied triples via a backward-chaining strategy. We show through experiments with real-life RDF datasets that our approach detects changes efficiently.
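
A rough sketch of the backward-chaining idea (assumptions mine, not the paper's implementation): a triple only counts as added or deleted if the other model cannot entail it, and entailment of each candidate triple is checked lazily for rdfs9/rdfs11 instead of materialising full closures.

```python
def entails(model, triple, depth=8):
    """Depth-bounded backward chaining over rdfs9 and rdfs11."""
    if triple in model:
        return True
    if depth == 0:
        return False
    s, p, o = triple
    if p == "rdf:type":
        # rdfs9 backwards: some asserted subclass c of o with (s rdf:type c) entailed
        return any(entails(model, (s, "rdf:type", c), depth - 1)
                   for c, pp, oo in model
                   if pp == "rdfs:subClassOf" and oo == o)
    if p == "rdfs:subClassOf":
        # rdfs11 backwards: (s subClassOf mid) asserted and (mid subClassOf o) entailed
        return any(entails(model, (mid, "rdfs:subClassOf", o), depth - 1)
                   for ss, pp, mid in model
                   if ss == s and pp == "rdfs:subClassOf" and mid != o)
    return False

def delta(old, new):
    """Delta that ignores triples the other model already entails."""
    added = {t for t in new if not entails(old, t)}
    deleted = {t for t in old if not entails(new, t)}
    return added, deleted

old = {("Student", "rdfs:subClassOf", "Person"), ("alice", "rdf:type", "Student")}
new = old | {("alice", "rdf:type", "Person"), ("alice", "rdf:type", "Agent")}
print(delta(old, new))
# only (alice rdf:type Agent) is reported as added; the Person triple is entailed by old
```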

Knowledge Representation and Extraction of Biological Data using RDFS + OWL (RDFS + OWL을 이용한 생물학적 데이터의 지식 표현과 추출)

  • Lee Seung Hui;Sin Mun Su;Jeong Mu Yeong
    • Proceedings of the Korean Operations and Management Science Society Conference / 2003.05a / pp.1136-1141 / 2003
  • Due to the lack of digitally usable standards, biological data have been known to be difficult to handle. For example, the names of genes and proteins change over time or have several synonyms indicating different entities. To cope with these problems, several communities, including the Gene Ontology Consortium and PubGene, are working to move the life sciences toward the semantic web vision. Although some progress has been made, the expressivity achieved so far is not sufficient for full-fledged ontological modeling and reasoning. This paper suggests a methodology for representing and extracting biological knowledge using the Web Ontology Language (OWL) as an extension of the Resource Description Framework Schema (RDFS). The benefits of our approach are: (1) extended sharing of biological metadata on the Web, and (2) the additional expressivity and semantics of RDFS+OWL.

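An illustrative rdflib sketch, not taken from the paper, of layering OWL terms on top of an RDFS vocabulary for biological data: synonym identifiers are linked with owl:sameAs and an inverse property is declared with owl:inverseOf, neither of which plain RDFS can express. The BIO namespace and all identifiers below are made up, and rdflib is assumed to be installed.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS, OWL

BIO = Namespace("http://example.org/bio#")   # hypothetical vocabulary
g = Graph()

# RDFS layer: class hierarchy and property typing
g.add((BIO.Gene, RDF.type, RDFS.Class))
g.add((BIO.Protein, RDF.type, RDFS.Class))
g.add((BIO.encodes, RDFS.domain, BIO.Gene))
g.add((BIO.encodes, RDFS.range, BIO.Protein))

# OWL layer: semantics beyond RDFS
g.add((BIO.encodedBy, OWL.inverseOf, BIO.encodes))
g.add((BIO.TP53, OWL.sameAs, BIO.P53))       # synonym gene identifiers

# Instance data
g.add((BIO.TP53, RDF.type, BIO.Gene))
g.add((BIO.TP53, BIO.encodes, BIO.p53_protein))

print(g.serialize(format="turtle"))
```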

RDFS Rule based Parallel Reasoning Scheme for Large-Scale Streaming Sensor Data (대용량 스트리밍 센서데이터 환경에서 RDFS 규칙기반 병렬추론 기법)

  • Kwon, SoonHyun;Park, Youngtack
    • Journal of KIISE / v.41 no.9 / pp.686-698 / 2014
  • Recently, large-scale streaming sensor data have emerged due to the explosive spread of smartphones, the diffusion of IoT and cloud computing technology, and the generalization of IoT devices. Research on combining such data with semantic web technology is also being actively pursued, driven by the demand to create new value from data through sharing and mash-up in large-scale environments. However, inference for creating new knowledge faces serious issues with large-scale streaming data. For this reason, we propose an RDFS rule-based parallel reasoning scheme for serving large-scale streaming sensor data with semantic web technology. In the proposed scheme, each job of the Rete network algorithm, an existing rule inference algorithm, runs in parallel, and data are shared through HBase, a Hadoop database, used as common storage. We implement the system and evaluate its performance using AWS data from the weather center as large-scale streaming sensor data.
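
A minimal single-process sketch of the Rete-style incremental matching described above, specialised to rdfs9: each rule condition keeps its own alpha memory, and an arriving triple is joined only against the opposite memory, so streamed triples never force re-running the rule over the whole dataset. The paper parallelises these jobs and shares data via HBase; here everything is a plain in-process dict, and the sample triples are invented.

```python
from collections import defaultdict, deque

type_by_class = defaultdict(set)    # alpha memory for (?x rdf:type ?c)
supers_by_sub = defaultdict(set)    # alpha memory for (?c rdfs:subClassOf ?d)

def on_triple(s, p, o):
    """Insert one triple and yield rdf:type triples newly derivable from it."""
    if p == "rdf:type":
        type_by_class[o].add(s)
        for sup in supers_by_sub[o]:          # join against the subclass memory
            yield (s, "rdf:type", sup)
    elif p == "rdfs:subClassOf":
        supers_by_sub[s].add(o)
        for inst in type_by_class[s]:         # join against the type memory
            yield (inst, "rdf:type", o)

def process(stream):
    queue, seen = deque(stream), set()
    while queue:
        t = queue.popleft()
        if t in seen:
            continue
        seen.add(t)
        for inferred in on_triple(*t):
            print("inferred:", inferred)
            queue.append(inferred)            # feed derived triples back into the network

process([("TempObservation", "rdfs:subClassOf", "Observation"),
         ("obs42", "rdf:type", "TempObservation"),
         ("Observation", "rdfs:subClassOf", "SensorEvent")])
# inferred: ('obs42', 'rdf:type', 'Observation')
# inferred: ('obs42', 'rdf:type', 'SensorEvent')
```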

Spark based Scalable RDFS Ontology Reasoning over Big Triples with Confidence Values (신뢰값 기반 대용량 트리플 처리를 위한 스파크 환경에서의 RDFS 온톨로지 추론)

  • Park, Hyun-Kyu;Lee, Wan-Gon;Jagvaral, Batselem;Park, Young-Tack
    • Journal of KIISE / v.43 no.1 / pp.87-95 / 2016
  • Recently, due to the development of the Internet and electronic devices, there has been an enormous increase in the amount of available knowledge and information. As this growth has proceeded, studies on large-scale ontological reasoning have been actively carried out. In general, a machine learning program or a knowledge engineer measures and provides a degree of confidence for each triple in a large ontology. Yet the collected ontology data contain inherent uncertainty, and reasoning over such data can cause vagueness in the reasoning results. In order to address this uncertainty, we propose an RDFS reasoning approach that utilizes confidence values indicating the degree of uncertainty in the collected data. Unlike conventional reasoning approaches that do not take data uncertainty into account, our approach uses the in-memory cluster computing framework Spark to compute confidence values for the data inferred through RDFS-based reasoning by applying uncertainty estimation methods. The computed confidence values represent the uncertainty in the inferred data. To evaluate our approach, ontology reasoning was carried out over the LUBM standard benchmark data sets with arbitrary confidence values added to the ontology triples. Experimental results indicate that the proposed system can process the largest data set, LUBM3000, in 1,179 seconds, inferring 350K triples.
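
A plain-Python sketch of the confidence idea above (the paper runs on Spark): every triple carries a confidence value, an rdfs9 derivation combines the confidences of its two premises, and duplicate derivations keep the highest resulting confidence. The combination operator (multiplication) is my assumption for illustration, not necessarily the one used in the paper.

```python
def rdfs9_with_confidence(typed, subclass):
    """typed: {(inst, cls): conf}; subclass: {(sub, sup): conf}, already closed."""
    inferred = {}
    for (inst, cls), c1 in typed.items():
        for (sub, sup), c2 in subclass.items():
            if cls == sub:
                conf = c1 * c2                        # combine premise confidences
                key = (inst, "rdf:type", sup)
                inferred[key] = max(conf, inferred.get(key, 0.0))   # keep the best
    return inferred

typed = {("alice", "Student"): 0.9}
subclass = {("Student", "Person"): 0.8, ("Student", "Agent"): 0.56}
print(rdfs9_with_confidence(typed, subclass))
# {('alice', 'rdf:type', 'Person'): ~0.72, ('alice', 'rdf:type', 'Agent'): ~0.50}
```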