• Title/Summary/Keyword: large-scale RDF data


A Dynamic Partitioning Scheme for Distributed Storage of Large-Scale RDF Data (대규모 RDF 데이터의 분산 저장을 위한 동적 분할 기법)

  • Kim, Cheon Jung;Kim, Ki Yeon;Yoo, Jong Hyeon;Lim, Jong Tae;Bok, Kyoung Soo;Yoo, Jae Soo
    • Journal of KIISE, v.41 no.12, pp.1126-1135, 2014
  • In recent years, RDF partitioning schemes have been studied for the effective distributed storage and management of large-scale RDF data. In this paper, we propose a dynamic RDF partitioning scheme that supports load balancing in dynamic environments where RDF data is continuously inserted and updated. The proposed scheme creates clusters and sub-clusters according to how frequently the RDF data is used by queries, and uses them as the graph partitioning criteria. We partition the created clusters and sub-clusters by considering the workload and data size of each server. As a result, the scheme resolves the concentration of data on a specific server caused by continuous insertions and updates, distributing the load across servers in dynamic environments. It is shown through performance evaluation that the proposed scheme significantly improves query processing time over the existing scheme.
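
The paper publishes no code; the following is a minimal Python sketch, under assumed names and cost weights, of the core placement rule the abstract describes: triples are clustered around frequently queried terms, and each cluster is assigned to the server with the lowest combined workload and data size.

```python
# Minimal sketch of frequency-aware dynamic placement (hypothetical names
# and cost weights; the paper's clustering and cost model are richer).
from collections import defaultdict

class Server:
    def __init__(self, name):
        self.name = name
        self.workload = 0    # accumulated query frequency of hosted clusters
        self.data_size = 0   # number of triples stored
        self.triples = []

def build_clusters(triples, access_freq):
    """Group triples by subject and record how often queries touch them."""
    clusters = defaultdict(lambda: {"triples": [], "access_freq": 0})
    for s, p, o in triples:
        clusters[s]["triples"].append((s, p, o))
        clusters[s]["access_freq"] = access_freq.get(s, 0)
    return list(clusters.values())

def assign_cluster(cluster, servers, alpha=0.5):
    """Place a cluster on the server minimizing a weighted sum of current
    workload and data size, so hot data does not pile up on one node."""
    target = min(servers,
                 key=lambda s: alpha * s.workload + (1 - alpha) * s.data_size)
    target.triples.extend(cluster["triples"])
    target.data_size += len(cluster["triples"])
    target.workload += cluster["access_freq"]
    return target

servers = [Server("srv1"), Server("srv2"), Server("srv3")]
triples = [("ex:a", "ex:p", "ex:b"), ("ex:a", "ex:q", "ex:c"),
           ("ex:d", "ex:p", "ex:e")]
freq = {"ex:a": 10, "ex:d": 1}
for cluster in sorted(build_clusters(triples, freq),
                      key=lambda c: -c["access_freq"]):
    print(assign_cluster(cluster, servers).name)
```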

Structural Change Detection Technique for RDF Data in MapReduce (맵리듀스에서의 구조적 RDF 데이터 변경 탐지 기법)

  • Lee, Taewhi;Im, Dong-Hyuk
    • KIPS Transactions on Software and Data Engineering, v.3 no.8, pp.293-298, 2014
  • Detecting and understanding the changes between RDF datasets is crucial for evolution, synchronization, and versioning systems on the web of data. However, current research on change detection remains unsatisfactory in that it neither considers the large scale of RDF data nor accurately produces RDF deltas. In this paper, we propose a scalable and effective change detection technique using the MapReduce framework, which has been used in many fields to process and analyze large volumes of data. In particular, we focus on structure-based change detection that adopts a strategy for the comparison of blank nodes in RDF data. To achieve this, we employ a method composed of two MapReduce jobs. The first job partitions the triples containing blank nodes by grouping the triples that share a blank node ID, and then computes the incoming path to each blank node. The second job partitions the triples with the same path and matches blank nodes using the Hungarian method. Our experiments show that this approach is more accurate and effective than the previous approach.
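
As a concrete illustration of the second job's matching step, here is a small Python sketch that pairs blank nodes with the Hungarian method via scipy's linear_sum_assignment. The Jaccard-style cost over each node's surrounding triples is an assumption of this sketch, not necessarily the paper's exact cost function, and the MapReduce plumbing is omitted.

```python
# Sketch of blank-node matching with the Hungarian method. The paper runs
# this inside a reducer per incoming-path group; here it is a plain
# function with an assumed Jaccard-style cost.
import numpy as np
from scipy.optimize import linear_sum_assignment

def triple_set(triples, bnode):
    """Triples touching a blank node, with the node itself masked out."""
    return {(s if s != bnode else "_", p, o if o != bnode else "_")
            for s, p, o in triples if bnode in (s, o)}

def match_blank_nodes(old_triples, old_bnodes, new_triples, new_bnodes):
    cost = np.zeros((len(old_bnodes), len(new_bnodes)))
    for i, b1 in enumerate(old_bnodes):
        t1 = triple_set(old_triples, b1)
        for j, b2 in enumerate(new_bnodes):
            t2 = triple_set(new_triples, b2)
            # 1 - Jaccard similarity: identical neighborhoods cost 0.
            cost[i, j] = 1 - len(t1 & t2) / max(len(t1 | t2), 1)
    rows, cols = linear_sum_assignment(cost)  # Hungarian method
    return [(old_bnodes[i], new_bnodes[j]) for i, j in zip(rows, cols)]

old = [("_:b1", "ex:name", '"Kim"'), ("ex:doc", "ex:author", "_:b1")]
new = [("_:x9", "ex:name", '"Kim"'), ("ex:doc", "ex:author", "_:x9")]
print(match_blank_nodes(old, ["_:b1"], new, ["_:x9"]))  # [('_:b1', '_:x9')]
```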

Conversion of Large RDF Data using Hash-based ID Mapping Tables with MapReduce Jobs (맵리듀스 잡을 사용한 해시 ID 매핑 테이블 기반 대량 RDF 데이터 변환 방법)

  • Kim, InA;Lee, Kyu-Chul
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2021.10a, pp.236-239, 2021
  • With the growth of AI technology, the scale of Knowledge Graphs continues to expand. Knowledge Graphs are mainly expressed in RDF representations that consist of connected triples. Many RDF stores compress and transform RDF triples into condensed IDs. However, converting a large number of RDF triples incurs high processing time and memory overhead, because it requires searching a large ID mapping table. In this paper, we propose a method for converting RDF triples using hash-based ID mapping tables with MapReduce, a software framework for parallel, distributed processing. Our proposed method not only transforms RDF triples into integer-based IDs, but also improves conversion speed and reduces memory overhead. In our experiment on LUBM, the size of the dataset was reduced by about 3.8 times and the conversion took about 106 seconds.
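
A minimal single-process Python sketch of the hash-based ID idea (function names are hypothetical, and collision handling and the MapReduce distribution are omitted): each RDF term is mapped to a fixed-width integer derived from its hash, so no shared mapping table has to be searched during conversion.

```python
# Sketch of hash-based ID mapping for RDF terms. Each node can compute the
# same ID independently, which is what removes the shared-table lookup.
import hashlib

def term_id(term: str) -> int:
    """Derive a fixed-width 64-bit integer ID from the term's MD5 hash."""
    return int.from_bytes(hashlib.md5(term.encode()).digest()[:8], "big")

def encode_triples(triples):
    ids, mapping = [], {}
    for s, p, o in triples:
        encoded = tuple(term_id(t) for t in (s, p, o))
        ids.append(encoded)
        for term, i in zip((s, p, o), encoded):
            mapping[i] = term  # kept only for decoding results back to terms
    return ids, mapping

encoded, mapping = encode_triples([("ex:alice", "foaf:knows", "ex:bob")])
print(encoded[0], "->", mapping[encoded[0][0]])
```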


An Efficient Indexing Scheme Considering the Characteristics of Large Scale RDF Data (대규모 RDF 데이터의 특성을 고려한 효율적인 색인 기법)

  • Kim, Kiyeon;Yoon, Jonghyeon;Kim, Cheonjung;Lim, Jongtae;Bok, Kyoungsoo;Yoo, Jaesoo
    • The Journal of the Korea Contents Association, v.15 no.1, pp.9-23, 2015
  • In this paper, we propose a new RDF index scheme that considers the characteristics of large-scale RDF data to improve query processing performance. The proposed scheme creates an S-O index for subjects and objects, since the subjects and objects of RDF triples are used redundantly. To reduce the total size of the index, it separately constructs a P index for the relatively small number of predicates in RDF triples. If a query contains a predicate, we first search the P index, since it is relatively small compared to the S-O index; otherwise, we first search the S-O index. It is shown through performance evaluation that the proposed scheme outperforms the existing scheme in terms of query processing time.
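
A toy in-memory Python sketch of the two-index routing described above (the paper's indexes are persistent structures, and the names here are hypothetical): queries with a bound predicate consult the small P index first, while all others consult the S-O index.

```python
# Sketch of S-O and P index routing. Subjects and objects repeat heavily,
# so they share one index; the few distinct predicates get their own.
from collections import defaultdict

class RDFIndex:
    def __init__(self, triples):
        self.so = defaultdict(list)  # S-O index: subject/object -> triples
        self.p = defaultdict(list)   # P index: predicate -> triples
        for t in triples:
            s, p, o = t
            self.so[s].append(t)
            self.so[o].append(t)
            self.p[p].append(t)

    def query(self, s=None, p=None, o=None):
        # Route to the smaller P index first when the predicate is bound.
        if p is not None:
            candidates = self.p[p]
        else:
            candidates = self.so[s if s is not None else o]
        return [(ts, tp, to) for ts, tp, to in candidates
                if s in (None, ts) and p in (None, tp) and o in (None, to)]

idx = RDFIndex([("ex:a", "ex:knows", "ex:b"), ("ex:a", "ex:likes", "ex:c")])
print(idx.query(s="ex:a", p="ex:knows"))
```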

Efficient Storing and SPARQL Search Scheme for Large Scale RDF Data (대용량 RDF 데이터의 효율적인 저장방법과 SPARQL 기반 검색방안 연구)

  • Oh, Sangyoon;Park, Ji-Hoon
    • Proceedings of the Korean Society of Computer Information Conference, 2016.07a, pp.195-197, 2016
  • RDF (Resource Description Framework), the standard language for building the Semantic Web, is difficult to store and retrieve effectively with conventional methods because of its graph-based nature. The performance problems grow even larger when storing and retrieving large-scale RDF data, so the topic has been studied extensively. In this paper, we present the results of a study on a storage scheme that supports SPARQL and can store and search RDF files effectively. Through preprocessing, duplicate subjects (S) or objects (O) in RDF triples (subject, property, object) are grouped together, and when a user issues a SPARQL query, the scheme finds the word cluster similar to the search term according to whether the user placed the variable in the subject position or in the predicate position. Since what used to require multiple searches for the same word can now be handled in a single search, efficiency improves.
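
A small Python sketch of the variable-position routing just described, under assumed structures: triples sharing a subject are merged into one cluster during preprocessing, and the query side checks whether the subject or the predicate position of the SPARQL pattern holds the variable before looking a cluster up once.

```python
# Sketch of preprocessing clusters plus variable-position query routing
# (single-machine stand-in for the paper's storage layer).
from collections import defaultdict

def cluster_triples(triples):
    """Preprocessing: merge triples sharing a subject into one cluster."""
    clusters = defaultdict(list)
    for s, p, o in triples:
        clusters[s].append((p, o))
    return clusters

def answer_pattern(clusters, pattern):
    s, p, o = pattern
    if s.startswith("?"):  # subject position holds the variable
        return [(subj, p, o) for subj, pos in clusters.items() if (p, o) in pos]
    if p.startswith("?"):  # predicate position holds the variable
        return [(s, pred, obj) for pred, obj in clusters.get(s, []) if obj == o]
    return []

clusters = cluster_triples([("ex:a", "ex:p", "ex:b"), ("ex:c", "ex:p", "ex:b")])
print(answer_pattern(clusters, ("?x", "ex:p", "ex:b")))  # subjects with (p, b)
```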


A Large-scale RDF Storage and Retrieval System for Linked Data (링크드 데이터를 위한 대용량 RDF 저장 및 검색 시스템)

  • Lee, Yong-Ju
    • Proceedings of the Korea Information Processing Society Conference, 2016.10a, pp.523-524, 2016
  • In this paper, we propose a large-scale RDF storage and retrieval system for Linked Data. The key issues for Linked Data today are its efficient storage and retrieval and the development of applications that exploit it. The proposed system consists of a storage manager, an index structure, and a query processor. The storage manager distributes data across a graph database to handle large-scale RDF data; the index structure implements multidimensional histograms, auxiliary indexing, and graph indexing techniques; and the query processor performs RDF triple retrieval using SPARQL or NoSQL queries, with query optimization and ranking techniques applied.
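
The abstract names three components; the following is a highly simplified Python skeleton of how they could fit together. Hash-partitioned lists stand in for the distributed graph database, and a predicate map stands in for the multidimensional and graph indexes, so everything here is an assumption of the sketch.

```python
# Skeleton of the three-part architecture: storage manager, index
# structure, query processor (toy in-memory stand-ins throughout).
class StorageManager:
    def __init__(self, num_partitions=3):
        self.partitions = [[] for _ in range(num_partitions)]
    def store(self, triple):
        # Distribute triples across partitions by subject hash.
        self.partitions[hash(triple[0]) % len(self.partitions)].append(triple)

class IndexStructure:
    def __init__(self):
        self.by_predicate = {}
    def add(self, triple):
        self.by_predicate.setdefault(triple[1], []).append(triple)

class QueryProcessor:
    def __init__(self, index):
        self.index = index
    def query(self, predicate):
        return self.index.by_predicate.get(predicate, [])

storage, index = StorageManager(), IndexStructure()
for t in [("ex:a", "ex:knows", "ex:b")]:
    storage.store(t)
    index.add(t)
print(QueryProcessor(index).query("ex:knows"))
```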

Real-time and Parallel Semantic Translation Technique for Large-Scale Streaming Sensor Data in an IoT Environment (사물인터넷 환경에서 대용량 스트리밍 센서데이터의 실시간·병렬 시맨틱 변환 기법)

  • Kwon, SoonHyun;Park, Dongwan;Bang, Hyochan;Park, Youngtack
    • Journal of KIISE, v.42 no.1, pp.54-67, 2015
  • Nowadays, studies on the fusion of Semantic Web technologies are being carried out to promote the interoperability and value of sensor data in an IoT environment. To accomplish this, the semantic translation of sensor data is essential for convergence with service domain knowledge. Existing semantic translation techniques, however, translate static metadata into semantic data (RDF) and cannot properly handle the real-time, large-scale characteristics of an IoT environment. Therefore, in this paper, we propose a technique for translating large-scale streaming sensor data generated in an IoT environment into semantic data, using real-time parallel processing. In this technique, we define rules for semantic translation and store them in a semantic repository. The sensor data is translated in real time with parallel processing using these pre-defined rules and an ontology-based semantic model. To improve performance, we use Apache Storm, a real-time big data analysis framework for parallel processing. The proposed technique was evaluated with the automatic weather station (AWS) observation data of the Korea Meteorological Administration, a large-scale streaming sensor dataset.
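
A minimal Python sketch of rule-driven translation for a single sensor reading. The rule format, the ex: ontology terms, and the field names are all assumptions of this sketch; the paper executes such logic inside Apache Storm bolts to obtain real-time parallelism.

```python
# Sketch of rule-based semantic translation: one raw reading in, RDF
# triples out. The rule table stands in for the semantic repository.
RULES = {
    # sensor field -> (RDF predicate, datatype) under an assumed ontology
    "temp": ("ex:hasTemperature", "xsd:float"),
    "hum":  ("ex:hasHumidity", "xsd:float"),
}

def translate(reading):
    """Turn one raw sensor reading (a dict) into RDF triples."""
    obs = f"ex:obs/{reading['sensor_id']}/{reading['ts']}"
    triples = [(obs, "rdf:type", "ex:Observation"),
               (obs, "ex:observedBy", f"ex:sensor/{reading['sensor_id']}")]
    for field, (pred, dtype) in RULES.items():
        if field in reading:
            triples.append((obs, pred, f'"{reading[field]}"^^{dtype}'))
    return triples

for t in translate({"sensor_id": "aws01", "ts": 1420070400, "temp": 3.2}):
    print(t)
```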

Semantic-based Mashup Platform for Contents Convergence

  • Yongju Lee;Hongzhou Duan;Yuxiang Sun
    • International Journal of Advanced Smart Convergence, v.12 no.2, pp.34-46, 2023
  • The growing number of large-scale knowledge graphs raises several issues regarding how knowledge graph data can be organized, discovered, and integrated efficiently. We present a novel semantic-based mashup platform for contents convergence, which consists of acquisition, RDF storage, ontology learning, and mashup subsystems. This platform serves as a basis for developing other, more sophisticated applications required in the area of knowledge big data. Moreover, this paper proposes an entity matching method using graph convolutional network techniques as preliminary work toward automatic classification and discovery on knowledge big data. Using the real DBP15K and SRPRS datasets, the performance of our method is compared with existing entity matching methods. The experimental results show that the proposed method outperforms existing methods owing to its higher accuracy and shorter training time.
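
As an illustration of the matching stage only, here is a small numpy sketch that aligns entities of two knowledge graphs by cosine similarity over embeddings assumed to have already been produced by a graph convolutional network. The GCN training itself is omitted, and the vectors and entity names are toy values.

```python
# Greedy entity alignment over pre-computed embeddings (toy stand-in for
# the paper's GCN-based matcher; the embeddings would come from the GCN).
import numpy as np

def cosine_matrix(A, B):
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return A @ B.T

def match_entities(emb1, emb2, names1, names2):
    sim = cosine_matrix(emb1, emb2)
    pairs = []
    # Greedy one-to-one alignment: each source entity, taken in order of
    # its best similarity, claims its best remaining target.
    for i in np.argsort(-sim.max(axis=1)):
        j = int(np.argmax(sim[i]))
        pairs.append((names1[i], names2[j], round(float(sim[i, j]), 3)))
        sim[:, j] = -np.inf  # target j is now taken
    return pairs

emb1 = np.array([[0.9, 0.1], [0.1, 0.9]], dtype=float)
emb2 = np.array([[0.8, 0.2], [0.2, 0.8]], dtype=float)
print(match_entities(emb1, emb2,
                     ["kg1:Seoul", "kg1:Busan"], ["kg2:Seoul", "kg2:Busan"]))
```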

SSQUSAR : A Large-Scale Qualitative Spatial Reasoner Using Apache Spark SQL (SSQUSAR : Apache Spark SQL을 이용한 대용량 정성 공간 추론기)

  • Kim, Jonghoon;Kim, Incheol
    • KIPS Transactions on Software and Data Engineering, v.6 no.2, pp.103-116, 2017
  • In this paper, we present the design and implementation of a large-scale qualitative spatial reasoner that can efficiently derive new qualitative spatial knowledge representing both topological and directional relationships between two arbitrary spatial objects, using Apache Spark SQL. Apache Spark SQL is well known as a distributed parallel programming environment that provides both efficient join operations and query processing functions over a variety of data in Hadoop cluster computing systems. In our spatial reasoner, the overall reasoning process is divided into six jobs: knowledge encoding, inverse reasoning, equal reasoning, transitive reasoning, relation refining, and knowledge decoding; the execution order of the reasoning jobs is determined in consideration of both logical causal relationships and computational efficiency. The knowledge encoding job reduces the size of the knowledge base to reason over by transforming the input knowledge from XML/RDF form into a more precise form. Repetition of the transitive reasoning and relation refining jobs usually consumes most of the computational time and storage of the overall reasoning process. To improve these jobs, our reasoner finds the minimal disjunctive relations for qualitative spatial reasoning and, based upon them, both reduces the composition table used for the transitive reasoning job and optimizes the relation refining job. Through experiments using a large-scale benchmark spatial knowledge base, the proposed reasoner showed high performance and scalability.
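
A minimal PySpark sketch of one transitive reasoning pass, expressed as the Spark SQL self-join the abstract alludes to. The single composition-table row and the region names are toy assumptions; the paper's tables, six-job pipeline, and fixpoint loop are far richer.

```python
# One transitive reasoning step as a Spark SQL self-join with a
# composition table (toy RCC-style relation excerpt).
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("qsr-sketch").getOrCreate()

facts = spark.createDataFrame(
    [("a", "NTPP", "b"), ("b", "NTPP", "c")], ["s", "rel", "o"])
# Composition table: rel1 composed with rel2 yields the derived relation.
comp = spark.createDataFrame(
    [("NTPP", "NTPP", "NTPP")], ["rel1", "rel2", "derived"])

# Join region pairs that share a middle region, then look the relation
# pair up in the composition table.
step = (facts.alias("f1")
        .join(facts.alias("f2"), col("f1.o") == col("f2.s"))
        .join(comp, (col("f1.rel") == col("rel1")) &
                    (col("f2.rel") == col("rel2")))
        .select(col("f1.s").alias("s"),
                col("derived").alias("rel"),
                col("f2.o").alias("o")))

# A full reasoner would union `step` into `facts` and repeat until no new
# rows appear; the paper's relation refining job prunes along the way.
facts.union(step).distinct().show()
```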

SPARQL Query Processing in Distributed In-Memory System (분산 메모리 시스템에서의 SPARQL 질의 처리)

  • Jagvaral, Batselem;Lee, Wangon;Kim, Kang-Pil;Park, Young-Tack
    • Journal of KIISE, v.42 no.9, pp.1109-1116, 2015
  • In this paper, we propose a query processing approach that uses Spark functional programming and a distributed memory system to address the computational overhead of SPARQL. In the Semantic Web, RDF ontology data is produced at large scale, and the main challenge is to query and manipulate such a large ontology with high throughput. Most existing studies on SPARQL have focused on deploying the Hadoop MapReduce framework, and although approaches based on Hadoop MapReduce have shown promising results, they achieve low throughput due to the underlying distributed file operations. Therefore, in order to speed up query processing, we suggest query-processing methods based on memory caching in a distributed memory system. Our approach also integrates a clause unification method for propagating bindings between clauses, which exploits Spark's join, map, and filter methods along with caching. In our experiments, we achieved a high level of performance relative to other approaches; in particular, our performance nearly matched that of Sempala, which has been considered the fastest query processing system.
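
A toy PySpark sketch of the cached, join-based clause unification described above. The data and the two-clause pattern are assumptions of the sketch; real SPARQL parsing, query optimization, and the paper's full unification machinery are omitted.

```python
# In-memory basic-graph-pattern matching on Spark RDDs: triples are cached
# once, each clause becomes a filter, and the shared variable is
# propagated between clauses via join.
from pyspark import SparkContext

sc = SparkContext("local[*]", "sparql-sketch")
triples = sc.parallelize([
    ("ex:alice", "foaf:knows", "ex:bob"),
    ("ex:bob",   "foaf:knows", "ex:carol"),
]).cache()  # keep the working set in memory instead of re-reading files

# Pattern: ?x foaf:knows ?y . ?y foaf:knows ?z
c1 = triples.filter(lambda t: t[1] == "foaf:knows") \
            .map(lambda t: (t[2], t[0]))  # keyed by the ?y binding
c2 = triples.filter(lambda t: t[1] == "foaf:knows") \
            .map(lambda t: (t[0], t[2]))  # keyed by the ?y binding

# Unify the shared variable ?y by joining the two clauses on its bindings.
answers = c1.join(c2).map(lambda kv: {"x": kv[1][0], "y": kv[0], "z": kv[1][1]})
print(answers.collect())  # [{'x': 'ex:alice', 'y': 'ex:bob', 'z': 'ex:carol'}]
```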