• Title/Summary/Keyword: MapReduce Framework

An Approach of Scalable SHIF Ontology Reasoning using Spark Framework (Spark 프레임워크를 적용한 대용량 SHIF 온톨로지 추론 기법)

  • Kim, Je-Min;Park, Young-Tack
    • Journal of KIISE
    • /
    • v.42 no.10
    • /
    • pp.1195-1206
    • /
    • 2015
  • For the management of knowledge systems, systems that automatically infer and manage scalable knowledge are required. Most of these systems use ontologies in order to exchange knowledge between machines and to infer new knowledge. Therefore, approaches are needed that infer new knowledge over scalable ontologies. In this paper, we propose an approach to rule-based reasoning for scalable SHIF ontologies in the Spark framework, which works similarly to MapReduce but operates on distributed memory across a cluster. To perform efficient reasoning in distributed memory, we focus on three areas. First, we define a data structure for splitting scalable ontology triples into small sets according to each reasoning rule and for loading these triple sets into distributed memory. Second, a rule execution order and iteration conditions are defined based on the dependencies and correlations among the SHIF rules. Finally, we describe the operations used to execute the rules, which are based on the reasoning algorithms. To evaluate the suggested methods, we compare against WebPIE, a representative cluster-based ontology reasoner, using the LUBM benchmark, a standard dataset for evaluating ontology inference and search speed. On LUBM, the proposed approach improves throughput by 28,400% (157k/sec) over WebPIE (553/sec).
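
The abstract above describes keeping per-rule triple sets in distributed memory and applying reasoning rules iteratively until no new triples are derived. Below is a minimal PySpark sketch of that pattern, using transitivity of rdfs:subClassOf as a simplified stand-in for the paper's SHIF rule set; the rule choice, data, and names are illustrative assumptions, not the authors' code.

```python
# A minimal PySpark sketch of iterative, rule-based reasoning over triples
# kept in distributed memory. The rule shown (transitivity of rdfs:subClassOf)
# is a simplified stand-in for the paper's SHIF rule set.
from pyspark import SparkContext

sc = SparkContext(appName="triple-reasoning-sketch")

SUBCLASS = "rdfs:subClassOf"
triples = sc.parallelize([
    ("A", SUBCLASS, "B"),
    ("B", SUBCLASS, "C"),
    ("C", SUBCLASS, "D"),
])

# Keep only the triples this rule needs, as (subject, object) edges.
edges = triples.filter(lambda t: t[1] == SUBCLASS).map(lambda t: (t[0], t[2])).cache()

closure = edges
while True:
    # If x subClassOf y and y subClassOf z, derive x subClassOf z:
    # key the current closure by its object and join it with edges keyed by subject.
    derived = (closure.map(lambda e: (e[1], e[0]))
                      .join(edges)
                      .map(lambda kv: (kv[1][0], kv[1][1])))
    merged = closure.union(derived).distinct().cache()
    if merged.count() == closure.count():
        break  # fixpoint reached: no new triples in this iteration
    closure = merged

print(sorted(closure.collect()))
sc.stop()
```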

Processing large-scale data with Apache Spark (Apache Spark를 활용한 대용량 데이터의 처리)

  • Ko, Seyoon;Won, Joong-Ho
    • The Korean Journal of Applied Statistics
    • /
    • v.29 no.6
    • /
    • pp.1077-1094
    • /
    • 2016
  • Apache Spark is a fast and general-purpose cluster computing package. It provides a new abstraction named the resilient distributed dataset (RDD), which supports fault tolerance while keeping data in memory. This abstraction yields a significant speedup compared to the legacy large-scale data processing framework, MapReduce. In particular, the Spark framework is suitable for iterative machine learning applications, such as logistic regression and K-means clustering, and for interactive data querying. Thanks to its versatility, Spark also provides high-level libraries for various applications such as machine learning, streaming data processing, database querying, and graph data mining. In this work, we introduce the concept and programming model of Spark and show implementations of simple statistical computing applications. We also review the machine learning package MLlib and the R language interface SparkR.
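
As a concrete illustration of the RDD abstraction and the iterative workloads mentioned above, the following minimal PySpark sketch caches a small dataset in memory and runs MLlib's K-means over it; the points and application name are illustrative, not taken from the paper.

```python
# A minimal PySpark sketch of the RDD abstraction: the data is cached in
# memory so that an iterative algorithm (here, MLlib's K-means) can reuse it
# across passes without rereading it from disk.
from pyspark import SparkContext
from pyspark.mllib.clustering import KMeans

sc = SparkContext(appName="rdd-kmeans-sketch")

# Parse the points once and keep them in distributed memory for the iterations.
points = sc.parallelize([
    [0.0, 0.0], [0.1, 0.2], [9.0, 9.1], [9.2, 8.8],
]).cache()

model = KMeans.train(points, k=2, maxIterations=10)
print(model.clusterCenters)
sc.stop()
```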

Design and Implementation of Big Data Platform for Image Processing in Agriculture (농업 이미지 처리를 위한 빅데이터 플랫폼 설계 및 구현)

  • Nguyen, Van-Quyet;Nguyen, Sinh Ngoc;Vu, Duc Tiep;Kim, Kyungbaek
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2016.10a
    • /
    • pp.50-53
    • /
    • 2016
  • Image processing techniques play an increasingly important role in many aspects of our daily life. For example, they have been shown to improve agricultural productivity in a number of ways, such as plant pest detection and fruit grading. However, the massive quantities of images generated in real time by multiple devices, such as remote sensors monitoring plant growth, lead to big data challenges. Meanwhile, most current image processing systems are designed for small-scale, local computation, and they do not scale well to big data problems, which demand large amounts of computational resources and storage. In this paper, we propose the IPABigData (Image Processing Algorithm BigData) platform, which provides algorithms to support large-scale image processing in agriculture based on the Hadoop framework. Hadoop provides the MapReduce parallel computation model and the Hadoop Distributed File System (HDFS). It can also handle parallel pipelines, which are frequently used in image processing. In our experiment, we show that our platform outperforms a traditional system in an image segmentation scenario.
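
To make the MapReduce image-processing idea above concrete, here is a minimal Hadoop Streaming-style mapper sketch: each input line is assumed to carry an image path, and the mapper emits a simple threshold-segmentation summary per image. It is not the IPABigData platform's code; Pillow is an assumed dependency, and in a real cluster the images would come from HDFS rather than local paths.

```python
#!/usr/bin/env python3
# A minimal Hadoop Streaming-style mapper sketch: each input line is assumed
# to be a path to an image; the mapper runs a simple grey-level threshold
# segmentation and emits "<path>\t<foreground-pixel-count>".
import sys
from PIL import Image

THRESHOLD = 128  # illustrative grey-level cut-off

for line in sys.stdin:
    path = line.strip()
    if not path:
        continue
    img = Image.open(path).convert("L")  # greyscale
    foreground = sum(1 for p in img.getdata() if p >= THRESHOLD)
    # key: image path, value: number of pixels classified as foreground
    print(f"{path}\t{foreground}")
```

A streaming reducer would then aggregate the per-image results; the platform described above builds richer pipelines on the same MapReduce/HDFS primitives.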

A Study on Performance Improvement of Distributed Computing Framework using GPU (GPU를 활용한 분산 컴퓨팅 프레임워크 성능 개선 연구)

  • Song, Ju-young;Kong, Yong-joon;Shim, Tak-kil;Shin, Eui-seob;Seong, Kee-kin
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2012.04a
    • /
    • pp.499-502
    • /
    • 2012
  • With the arrival of the big data analytics era, there is a growing demand for solving problems that involve both large-scale data and computation-intensive operations. For large-scale data processing, various distributed file systems and distributed/parallel computing techniques are already in wide use, and computation-intensive processing is likewise becoming commonplace thanks to advances in GPGPU technology. However, handling problems that have both characteristics at once requires resolving many constraints. As an alternative, this paper proposes a way to couple Hadoop MapReduce, a distributed computing framework, with CUDA, Nvidia's parallel GPU computing architecture, and presents the performance improvements obtained when this combination is applied to dense matrix operations.
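
The abstract gives no implementation details, so the sketch below only illustrates the general shape of the idea: a MapReduce-style mapper that offloads a dense-matrix block product to the GPU. The paper couples Java Hadoop with CUDA directly; here CuPy stands in for the CUDA side, and the input format and block size are illustrative assumptions.

```python
#!/usr/bin/env python3
# A sketch of the idea only: a MapReduce-style mapper that offloads a dense
# matrix block product to the GPU. CuPy stands in for the CUDA side; the
# input format (one "row_block<TAB>col_block" pair per line) and the random
# block contents are illustrative.
import sys
import cupy as cp

BLOCK = 512  # illustrative block size

for line in sys.stdin:
    if not line.strip():
        continue
    row_block, col_block = line.strip().split("\t")
    # In a real job the blocks would be read from HDFS; random data stands in.
    a = cp.random.rand(BLOCK, BLOCK, dtype=cp.float32)
    b = cp.random.rand(BLOCK, BLOCK, dtype=cp.float32)
    c = a @ b                    # the dense multiplication runs on the GPU
    checksum = float(c.sum())    # bring a small summary back to the host
    print(f"{row_block},{col_block}\t{checksum}")
```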

An Efficient Parallel Construction Scheme of An R-Tree using Hadoop (Hadoop을 이용한 R-트리의 효율적인 병렬 구축 기법)

  • Cong, Viet-Ngu Huynh;Kim, Jongmin;Kwon, Oh-Heum;Song, Ha-Joo
    • Journal of Korea Multimedia Society
    • /
    • v.22 no.2
    • /
    • pp.231-241
    • /
    • 2019
  • Bulk-loading an R-tree can be a good approach to building an efficient one. However, bulk-loading an R-tree for a huge amount of data takes a lot of time. In this paper, we propose a parallel R-tree construction scheme based on the Hadoop framework. The proposed scheme divides the data set into a number of partitions, for which local R-trees are built in parallel via MapReduce operations. The local R-trees are then merged into a global R-tree that covers the whole data set. While generating the partitions, the scheme takes the spatial distribution of the data into account so that each partition contains a nearly equal amount of data. Therefore, the proposed scheme yields an efficient index structure while reducing construction time. Experimental tests show that the proposed scheme builds an R-tree more efficiently than existing approaches.
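
A minimal, single-machine sketch of that partition-then-merge pattern follows, with multiprocessing standing in for Map tasks and the `rtree` package (an assumed dependency) providing the local trees; the actual scheme runs on Hadoop and merges the local trees into one global R-tree rather than the simplified root index shown here.

```python
# Partition-then-merge sketch: split the data into spatially balanced
# partitions, build a local R-tree per partition in parallel, then index the
# local trees' bounding boxes at a root level.
import random
from multiprocessing import Pool
from rtree import index

def build_local_tree(points):
    """Map-task stand-in: insert one partition into a local R-tree."""
    idx = index.Index()
    for i, (x, y) in enumerate(points):
        idx.insert(i, (x, y, x, y))
    return idx.bounds  # bounding box of this partition's local tree

if __name__ == "__main__":
    points = sorted((random.random(), random.random()) for _ in range(10_000))
    # Contiguous, equally sized chunks of the x-sorted data approximate the
    # distribution-aware partitioning described above.
    k, n = 4, len(points)
    partitions = [points[i * n // k:(i + 1) * n // k] for i in range(k)]

    with Pool(k) as pool:
        local_bounds = pool.map(build_local_tree, partitions)

    # Reduce-task stand-in: index the local trees by their bounding boxes.
    root = index.Index()
    for pid, bounds in enumerate(local_bounds):
        root.insert(pid, tuple(bounds))
    print("partition MBRs:", local_bounds)
```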

Implementation of a Raspberry-Pi-Sensor Network (라즈베리파이 센서 네트워크 구현)

  • Moon, Sangook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2014.10a
    • /
    • pp.915-916
    • /
    • 2014
  • With the upcoming era of the Internet of Things, research on sensor networks has been attracting attention. The Raspberry Pi is a tiny, versatile computer system that can act as a sensor node in a Hadoop cluster network. In this paper, we deployed five Raspberry Pis to construct an experimental Hadoop sensor network testbed running a five-node MapReduce Hadoop software framework. We compared and analyzed the network architecture in terms of efficiency, resource management, and throughput using various parameters, with a support vector machine learner as the test workload. In our experiments, the Raspberry Pi fulfilled the role of a distributed computing sensor node in the sensor network.

Distributed Support Vector Machines for Localization on a Sensor Network (센서 네트워크에서 위치 측정을 위한 분산 지지 벡터 머신)

  • Moon, Sangook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2014.10a
    • /
    • pp.944-946
    • /
    • 2014
  • Localization of a sensor network node using machine learning has recently been studied. The support vector machine algorithm is easy to implement in a high-level language that enables parallelism. In this paper, we implemented a support vector machine in the Python language and built a sensor network cluster with five Raspberry Pis. We also set up a Hadoop software framework to employ the MapReduce mechanism. We modified the existing support vector machine algorithm to fit the distributed Hadoop architecture for localization of a sensor node. In our experiment, we configured the test sensor network with a variety of parameters and evaluated it in terms of efficiency, resource usage, and processing time.
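
The abstract does not say exactly how the SVM was restructured for MapReduce, so the sketch below shows one common pattern under that assumption: each map task trains a local linear SVM on its shard and the reduce step averages the parameters. scikit-learn and the synthetic 2-D localization-style data stand in for the authors' own Python SVM code.

```python
# Single-process sketch of MapReduce-style SVM training: per-shard local
# models in the "map" step, parameter averaging in the "reduce" step.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

def make_shard(n=200):
    """Synthetic shard: two regions of a 2-D space to separate."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    return X, y

def map_task(shard):
    """Train a local SVM on one shard; emit its weight vector and bias."""
    X, y = shard
    clf = LinearSVC(max_iter=5000).fit(X, y)
    return clf.coef_.ravel(), float(clf.intercept_[0])

def reduce_task(models):
    """Average the per-shard parameters into one global linear model."""
    weights, biases = zip(*models)
    return np.mean(weights, axis=0), float(np.mean(biases))

shards = [make_shard() for _ in range(4)]   # stands in for HDFS input splits
w, b = reduce_task([map_task(s) for s in shards])
print("global weights:", w, "bias:", b)
```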

Implementation of Video Processing Framework using Hadoop-based cloud computing (Hadoop 기반 클라우드 컴퓨팅을 이용한 영상 처리 프레임워크 구현)

  • Ryu, Chungmo;Lee, Daecheol;Jang, Minwook;Kim, Cheolgi
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2013.11a
    • /
    • pp.139-142
    • /
    • 2013
  • Recently, research on cloud computing for collecting information from and processing large volumes of video data has been active. However, most cloud work based on open-source software combines programs only at the program level rather than at the library level, so the performance problems caused by the inefficiency of such simple combinations are rarely addressed. In this paper, we focus on resolving this inefficiency and build a video cloud environment with better performance than before by combining FFmpeg and Hadoop at the library level. JNI (Java Native Interface) is used to couple FFmpeg, a C-based video processing library, with Hadoop, a Java-based cloud environment. In the implementation, HDFS (Hadoop Distributed File System) is extended so that Hadoop MapReduce can access video files directly through FFmpeg, preventing the performance degradation caused by the unnecessary work that arises from the differing file access methods of FFmpeg and Hadoop. In addition, for application scalability, the working video is split into GOP (Group of Pictures) units, the minimum unit of video processing, and distributed to the cloud nodes. As a result, the total processing time is shorter than that of existing video processing clouds that combine Hadoop and FFmpeg at the program level, and GOP-level processing guarantees stability and application scalability for video-based tasks.
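
To illustrate the GOP-splitting step mentioned above, here is a small sketch that uses ffprobe (part of FFmpeg) to find keyframe packets, which mark GOP boundaries, and turns them into time ranges that could each be dispatched to a different cluster node. The paper itself couples the C FFmpeg library with Java Hadoop through JNI rather than calling command-line tools; the file name is a placeholder.

```python
# GOP-splitting sketch: keyframe packets reported by ffprobe mark the start
# of each GOP; successive keyframe times define the per-GOP time ranges.
import subprocess

def gop_ranges(path):
    """Return (start, end) ranges in seconds, one per GOP."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_packets", "-show_entries", "packet=pts_time,flags",
         "-of", "csv=p=0", path],
        capture_output=True, text=True, check=True).stdout
    key_times, last_time = [], 0.0
    for line in out.splitlines():
        fields = line.split(",")
        if len(fields) < 2 or fields[0] in ("", "N/A"):
            continue
        t, flags = float(fields[0]), fields[1]
        last_time = max(last_time, t)
        if "K" in flags:           # a keyframe packet starts a new GOP
            key_times.append(t)
    return list(zip(key_times, key_times[1:] + [last_time]))

if __name__ == "__main__":
    for i, (start, end) in enumerate(gop_ranges("input.mp4")):
        # In the cloud setting, each GOP range would go to a different node.
        print(f"GOP {i}: {start:.3f}s - {end:.3f}s")
```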

A Plan of Spatial Data Modeling for Tidal Power Energy Development (조력에너지 개발을 위한 공간데이터 모델링 방안)

  • Oh, Jung-Hee;Choi, Hyun-Woo;Park, Jin-Soon;Lee, Kwang-Soo
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.14 no.3
    • /
    • pp.22-35
    • /
    • 2011
  • Incheon Bay has suitable conditions for tidal power generation due to its high tidal range caused by topographical effects, and a study on technology development for tidal energy utilization has therefore been under way since 2006. Optimal alternatives need to be derived to determine suitable locations for tidal power generation facilities and to reduce the environmental damage caused by development. To carry out this mission efficiently, a spatial information system is essential for managing and using the various spatial elements related to development and conservation. In this study, spatial data for tidal energy development are defined as three kinds of datasets. The fundamental dataset consists of spatial data such as tide, tidal current, wave, erosion, and sedimentation. The framework dataset is composed of topographical maps, facility maps, and bathymetry. The reference dataset consists of marine ecology and environment data with the characteristics of thematic maps. This study mainly aims to establish a methodology for conceptual spatial data modeling that classifies the components of spatial data into an essential data model and an optional data model.
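
A small sketch, assuming Python dataclasses, of the dataset classification described above: layers are grouped into fundamental, framework, and reference datasets, and each layer is marked as part of the essential or the optional data model. Which layers are essential is not stated in the abstract, so the flags below are purely illustrative.

```python
# Conceptual spatial data model sketch for the three dataset categories.
from dataclasses import dataclass
from enum import Enum

class DatasetKind(Enum):
    FUNDAMENTAL = "fundamental"  # tide, tidal current, wave, erosion, sedimentation
    FRAMEWORK = "framework"      # topographical map, facility map, bathymetry
    REFERENCE = "reference"      # marine ecology and environment thematic maps

@dataclass
class SpatialLayer:
    name: str
    kind: DatasetKind
    essential: bool              # True: essential data model, False: optional

layers = [
    SpatialLayer("tide", DatasetKind.FUNDAMENTAL, essential=True),
    SpatialLayer("tidal current", DatasetKind.FUNDAMENTAL, essential=True),
    SpatialLayer("bathymetry", DatasetKind.FRAMEWORK, essential=True),
    SpatialLayer("marine ecology", DatasetKind.REFERENCE, essential=False),
]

print("essential data model:", [l.name for l in layers if l.essential])
```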

A new approach for overlay text detection from complex video scene (새로운 비디오 자막 영역 검출 기법)

  • Kim, Won-Jun;Kim, Chang-Ick
    • Journal of Broadcast Engineering
    • /
    • v.13 no.4
    • /
    • pp.544-553
    • /
    • 2008
  • With the development of video editing technology, overlay text is increasingly inserted into video content to give viewers a better visual understanding. Since the content of the scene or the editor's intention can be well represented by the inserted text, it is useful for video information retrieval and indexing. Most previous approaches are based on low-level features such as edge, color, and texture information. However, existing methods have difficulty handling text with varying contrast or text inserted into a complex background. In this paper, we propose a novel framework for localizing overlay text in a video scene. Based on our observation that transient colors exist between inserted text and its adjacent background, a transition map is generated. Candidate regions are then extracted using the transition map, and overlay text is finally determined based on the density of state in each candidate. The proposed method is robust to the color, size, position, style, and contrast of overlay text, and it is language-independent. Text region updating between frames is also exploited to reduce processing time. Experiments on diverse videos confirm the efficiency of the proposed method.
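
The abstract does not define the transition map or the density of state precisely, so the sketch below only illustrates the shape of the pipeline: a horizontal intensity-difference map stands in for the transition map, and rows with a high density of strong transitions stand in for the candidate regions. NumPy and Pillow are assumed dependencies; the file name is a placeholder.

```python
# Pipeline-shape sketch: transition-map stand-in, candidate rows by density.
import numpy as np
from PIL import Image

frame = np.asarray(Image.open("frame.png").convert("L"), dtype=np.float32)

# 1. Transition-map stand-in: strong changes between horizontally adjacent
#    pixels, which overlay text tends to produce against its background.
transition = np.abs(np.diff(frame, axis=1)) > 40.0   # illustrative threshold

# 2. Candidate-region stand-in: rows whose transition density is high.
row_density = transition.mean(axis=1)
candidate_rows = np.where(row_density > 0.05)[0]

# 3. A final per-candidate decision would compare each band's density against
#    the rest of the frame; here we only report the candidate row band.
if candidate_rows.size:
    print(f"candidate text rows: {candidate_rows.min()}-{candidate_rows.max()}")
else:
    print("no candidate text rows found")
```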