• Title/Summary/Keyword: Hadoop, Spark (하둡, 스파크)


K Nearest Neighbor Joins for Big Data Processing based on Spark (Spark 기반 빅데이터 처리를 위한 K-최근접 이웃 연결)

  • JIAQI, JI;Chung, Yeongjee
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.9 / pp.1731-1737 / 2017
  • K Nearest Neighbor Join (KNN Join) is a simple yet effective method in machine learning that was widely used on the small datasets of the past. As data volumes grow, it becomes infeasible to run the model on a single machine in real applications due to memory and time constraints. MapReduce, a popular batch-processing model that runs on clusters of many computers, is now widely used for large-scale data processing. Hadoop is a framework that implements MapReduce, but its performance can be further improved by a newer framework named Spark. In the present study, we provide a KNN Join implementation based on Spark. Thanks to Spark's in-memory computation capability, it is faster and more effective than Hadoop. In our experiments, we study the influence of different factors on running time and demonstrate the robustness and efficiency of our approach.
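
The abstract does not detail the authors' partitioning scheme, so the following is only a minimal broadcast-based KNN join sketch in Scala on Spark, assuming the smaller relation R fits in driver memory; all names and data are illustrative.

```scala
import org.apache.spark.sql.SparkSession

// Minimal broadcast-based KNN join sketch (not the paper's exact algorithm).
// Assumes the smaller relation R fits in driver memory so it can be broadcast.
object KnnJoinSketch {
  type Point = (Long, Array[Double]) // (id, feature vector)

  def dist(a: Array[Double], b: Array[Double]): Double =
    math.sqrt(a.zip(b).map { case (x, y) => (x - y) * (x - y) }.sum)

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("knn-join").getOrCreate()
    val sc = spark.sparkContext

    val r: Seq[Point] = Seq((1L, Array(0.0, 0.0)), (2L, Array(1.0, 1.0)))
    val s = sc.parallelize(Seq[Point]((10L, Array(0.2, 0.1)), (11L, Array(0.9, 1.2))))
    val k = 1

    val rB = sc.broadcast(r) // ship R once to every executor
    // For each point of S, keep its k nearest points of R (in memory, no shuffle).
    val knn = s.map { case (sid, sv) =>
      val neighbors = rB.value
        .map { case (rid, rv) => (rid, dist(sv, rv)) }
        .sortBy(_._2)
        .take(k)
      (sid, neighbors)
    }
    knn.collect().foreach(println)
    spark.stop()
  }
}
```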

Parallel k-Modes Algorithm for Spark Framework (스파크 프레임워크를 위한 병렬적 k-Modes 알고리즘)

  • Chung, Jaehwa
    • KIPS Transactions on Software and Data Engineering / v.6 no.10 / pp.487-492 / 2017
  • Clustering is a technique used to measure similarity between data items in big data analysis and data mining. Among the various clustering methods, the k-Modes algorithm is representative for categorical data. To improve the performance of iteration-centric tasks such as k-Modes, the distributed in-memory framework Spark has recently received great attention because it overcomes limitations of Hadoop. Spark provides an environment that can process large amounts of data in main memory using an abstraction called the RDD. Spark also provides MLlib, a dedicated machine-learning library, but MLlib includes only k-means, which handles continuous data, so categorical data cannot be processed. In this paper, we design RDDs for the k-Modes algorithm for categorical data clustering in the Spark environment and implement an algorithm that operates effectively. Experiments show that the proposed algorithm scales linearly in the Spark environment.
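
As a rough illustration of what one k-Modes iteration looks like over RDDs (a sketch under assumed record layout, not the paper's RDD design):

```scala
import org.apache.spark.rdd.RDD

// One k-Modes iteration over categorical records: a minimal sketch, not the
// paper's design. Distance is the Hamming distance (count of mismatching attributes).
object KModesSketch {
  def hamming(a: Array[String], b: Array[String]): Int =
    a.zip(b).count { case (x, y) => x != y }

  def step(data: RDD[Array[String]], modes: Array[Array[String]]): Array[Array[String]] = {
    // Assign each record to its nearest mode...
    val assigned = data.map(rec => (modes.indices.minBy(i => hamming(rec, modes(i))), rec))
    // ...then recompute each cluster's mode attribute-by-attribute as the
    // most frequent value (the categorical analogue of the mean).
    val newModes = assigned.groupByKey().mapValues { recs =>
      Array.tabulate(recs.head.length) { j =>
        recs.map(_(j)).groupBy(identity).maxBy(_._2.size)._1
      }
    }.collectAsMap()
    modes.indices.map(i => newModes.getOrElse(i, modes(i))).toArray // keep old mode for empty clusters
  }
}
```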

Spatial Computation on Spark Using GPGPU (GPGPU를 활용한 스파크 기반 공간 연산)

  • Son, Chanseung;Kim, Daehee;Park, Neungsoo
    • KIPS Transactions on Computer and Communication Systems / v.5 no.8 / pp.181-188 / 2016
  • Recently, as the amount of spatial information grows, interest in research on spatial information processing has increased. Spatial database systems, extended from traditional relational database systems, have difficulty handling large data sets because of limited scalability. SpatialHadoop, extended from the Hadoop system, performs poorly because its spatial computations require many writes of intermediate results to disk, degrading performance. In this paper, Spatial Computation Spark (SC-Spark), an in-memory distributed processing framework, is proposed. SC-Spark extends Spark to efficiently perform spatial operations on large-scale data. In addition, a GPGPU-based SC-Spark is developed to further improve performance. SC-Spark exploits Spark's ability to hold intermediate results in memory, and the GPGPU-based SC-Spark can perform spatial operations in parallel using the many processing elements of a GPU. To verify the proposed work, experiments on a single AMD system were performed using SC-Spark and GPGPU-based SC-Spark for point-in-polygon and spatial join operations. The experimental results showed that SC-Spark and GPGPU-based SC-Spark were up to 8 times faster than SpatialHadoop.
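
For reference, the point-in-polygon primitive mentioned above can be distributed over an RDD as in this CPU-only Scala sketch (the paper's GPGPU kernel is not reproduced; polygon and points are illustrative):

```scala
import org.apache.spark.SparkContext

// Ray-casting point-in-polygon test distributed over an RDD of points.
// A CPU-only sketch of the kind of primitive SC-Spark parallelizes.
object PointInPolygonSketch {
  // Classic even-odd rule: count crossings of a horizontal ray with polygon edges.
  def inside(px: Double, py: Double, poly: Array[(Double, Double)]): Boolean = {
    var in = false
    var j = poly.length - 1
    for (i <- poly.indices) {
      val (xi, yi) = poly(i); val (xj, yj) = poly(j)
      if ((yi > py) != (yj > py) &&
          px < (xj - xi) * (py - yi) / (yj - yi) + xi) in = !in
      j = i
    }
    in
  }

  def run(sc: SparkContext): Unit = {
    val poly = Array((0.0, 0.0), (4.0, 0.0), (4.0, 4.0), (0.0, 4.0)) // a 4x4 square
    val polyB = sc.broadcast(poly)
    val points = sc.parallelize(Seq((1.0, 1.0), (5.0, 5.0)))
    points.filter { case (x, y) => inside(x, y, polyB.value) }
          .collect().foreach(println) // only (1.0, 1.0) falls inside
  }
}
```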

Improving Performance Based on Processing Analysis of Big Data Log Files (빅데이터 로그파일 처리 분석을 통한 성능 개선 방안)

  • Lee, Jaehan;Yu, Heonchang
    • Proceedings of the Korea Information Processing Society Conference / 2016.10a / pp.539-541 / 2016
  • Recently, Apache Hadoop-based ecosystems have been widely used for big data analysis. In this paper, we conduct a performance evaluation of the process of transforming collected log data and loading it into a database, with the aim of handling that process efficiently. Based on this, we propose a method to improve the performance of inserting log data from text files into an Oracle database via JDBC in a program written in Java. To process large log files efficiently, we improve the processing speed using the Hadoop ecosystem, and we also perform a comparative performance evaluation against Apache Spark, which has recently drawn attention for its fast in-memory processing. Through this study, we propose an optimal architecture for a log data processing system.
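
A minimal sketch of the Spark-side pipeline such a comparison would involve: parse text logs in parallel, then bulk-load via Spark's JDBC writer. The log format, table name, and connection settings below are hypothetical; the paper does not specify them.

```scala
import org.apache.spark.sql.SparkSession

// Parse text logs in parallel, then bulk-insert via Spark's JDBC writer
// (batched inserts instead of row-by-row JDBC from a single Java process).
object LogLoadSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("log-load").getOrCreate()
    import spark.implicits._

    // Assume space-separated lines: <timestamp> <level> <message...>
    val logs = spark.read.textFile("hdfs:///logs/app.log")
      .map { line =>
        val parts = line.split(" ", 3)
        (parts(0), parts(1), if (parts.length > 2) parts(2) else "")
      }
      .toDF("ts", "level", "message")

    logs.write
      .format("jdbc")
      .option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCL") // hypothetical DSN
      .option("dbtable", "APP_LOG")                          // hypothetical table
      .option("user", "scott")
      .option("password", "tiger")
      .option("batchsize", 10000) // JDBC batch inserts
      .mode("append")
      .save()
    spark.stop()
  }
}
```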

A Study for Big Data Analytics Platform with Raspberry Pi Cluster and Apache Spark (라즈베리 파이 클러스터와 아파치 스파크를 활용한 빅데이터 분석 플랫폼 연구)

  • Kim, Young-Sun;Park, Ji-Young;Yoon, Bo-Ram;Lee, Jung-Hyun;Yong, Hwan-Seung
    • Proceedings of the Korea Information Processing Society Conference / 2015.10a / pp.1272-1275 / 2015
  • Parallel and distributed processing systems for big data analysis, which have been attracting growing interest, require large servers and high infrastructure costs. To address this, this study builds a big data analytics platform on a cluster of inexpensive Raspberry Pi boards, with Apache Spark, which processes data faster than Hadoop, as the analysis solution. To verify that the platform performs adequately for big data workloads, we ran a text-mining task, and the results showed valid performance. As big data analysis becomes possible at a reasonable cost, small businesses, individuals, and educational institutions will also be able to exploit big data, and its fields of application are expected to expand considerably.
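
The abstract does not specify the text-mining workload; a word-frequency job like the following Scala sketch is the classic smoke test for a new Spark cluster of this kind (the input path is hypothetical):

```scala
import org.apache.spark.sql.SparkSession

// Minimal word-frequency job: tokenize a corpus, count words, show the top 20.
object WordFreqSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("word-freq").getOrCreate()
    val counts = spark.sparkContext.textFile("hdfs:///corpus/*.txt")
      .flatMap(_.toLowerCase.split("\\W+")) // crude tokenization
      .filter(_.nonEmpty)
      .map((_, 1))
      .reduceByKey(_ + _)
      .sortBy(_._2, ascending = false)
    counts.take(20).foreach(println)
    spark.stop()
  }
}
```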

Appingpot : Application curation platform based on Hadoop and Spark (Appingpot : 하둡 및 스파크를 활용한 어플리케이션 큐레이션 플랫폼)

  • Jeon, Sangwoo;Shim, Euiseok;Chi, Jeonghee
    • Proceedings of the Korea Information Processing Society Conference / 2016.10a / pp.372-373 / 2016
  • Curation services are actively operating today, both in Korea and abroad. In the explosively grown application market, users find it increasingly difficult to discover and install apps that suit them. In response, this paper proposes Appingpot, an application curation service. Based on app log data collected from users and Facebook friend information, Appingpot uses Hadoop and Spark to recommend suitable apps to users.
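
The abstract does not name the recommendation algorithm; one plausible Spark-based approach is MLlib's ALS collaborative filtering over implicit app-usage counts, sketched below with a hypothetical schema:

```scala
import org.apache.spark.ml.recommendation.ALS
import org.apache.spark.sql.SparkSession

// ALS over implicit app-usage counts: a sketch, not Appingpot's actual method.
object AppRecSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("app-rec").getOrCreate()
    import spark.implicits._

    // (userId, appId, usageCount) derived from collected app logs (toy data)
    val usage = Seq((1, 100, 5.0f), (1, 101, 1.0f), (2, 100, 3.0f))
      .toDF("userId", "appId", "count")

    val als = new ALS()
      .setImplicitPrefs(true) // usage counts, not explicit ratings
      .setUserCol("userId").setItemCol("appId").setRatingCol("count")
    val model = als.fit(usage)
    model.recommendForAllUsers(10).show() // top-10 app candidates per user
    spark.stop()
  }
}
```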

Development of Big Data System for Energy Big Data (에너지 빅데이터를 수용하는 빅데이터 시스템 개발)

  • Song, Mingoo
    • KIISE Transactions on Computing Practices / v.24 no.1 / pp.24-32 / 2018
  • This paper proposes a Big Data system for energy Big Data aggregated in real time from industrial and public sources. The constructed system is based on Hadoop, with the Spark framework applied simultaneously for Big Data processing, which supports in-memory distributed computing. The paper focuses on Big Data in the form of heat energy for district heating and deals with methodologies for storing, managing, processing, and analyzing the aggregated Big Data in real time while considering the properties of energy input and output. The incoming Big Data is stored and managed according to the relational database schema designed inside the system, and the stored Big Data is processed and analyzed according to the set objectives. The paper exemplifies a number of heat-demand plants involved in district heating as industrial sources of the heat energy Big Data gathered in real time by the proposed system.
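
A sketch of the Spark side of such a system, reading heat readings from the relational store and aggregating demand per plant per hour; the table and column names are hypothetical, since the paper's schema is not given here:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Read heat readings over JDBC and aggregate hourly demand per plant.
object HeatAggSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("heat-agg").getOrCreate()

    val readings = spark.read.format("jdbc")
      .option("url", "jdbc:mysql://dbhost:3306/energy") // hypothetical DSN
      .option("dbtable", "heat_readings")               // (plant_id, ts, mwh)
      .load()

    readings
      .groupBy(col("plant_id"), window(col("ts"), "1 hour")) // hourly buckets
      .agg(sum("mwh").as("demand_mwh"))
      .orderBy("plant_id")
      .show()
    spark.stop()
  }
}
```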

Design of Spark SQL Based Framework for Advanced Analytics (Spark SQL 기반 고도 분석 지원 프레임워크 설계)

  • Chung, Jaehwa
    • KIPS Transactions on Software and Data Engineering / v.5 no.10 / pp.477-482 / 2016
  • As advanced analytics on big data becomes indispensable for agile decision-making and tactical planning in enterprises, distributed processing platforms such as Hadoop and Spark, which distribute and handle large volumes of data across multiple nodes, are receiving great attention in the field. In the Spark platform stack, Spark SQL was recently unveiled to give Spark a distributed processing framework based on SQL. However, Spark SQL cannot effectively handle advanced analytics involving machine learning and graph processing in terms of iterative tasks and task allocation. Motivated by these issues, this paper proposes the design of a SQL-based big data processing engine and framework to support advanced analytics in Spark environments. The proposed engine copes with complex SQL queries involving multiple parameters and join, aggregation, and sorting operations in a distributed, parallel manner, and the proposed framework optimizes the machine-learning process in terms of relational operations.
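
To illustrate the query shape the engine targets, here is a join plus aggregation and sort expressed in Spark SQL; the tables, columns, and data are hypothetical:

```scala
import org.apache.spark.sql.SparkSession

// A join + aggregation + sort query, the kind Spark SQL executes in a
// distributed, parallel manner.
object SparkSqlSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("sql-sketch").getOrCreate()
    import spark.implicits._

    Seq((1, "north"), (2, "south")).toDF("store_id", "region")
      .createOrReplaceTempView("stores")
    Seq((1, 120.0), (1, 80.0), (2, 200.0)).toDF("store_id", "amount")
      .createOrReplaceTempView("sales")

    spark.sql(
      """SELECT st.region, SUM(sa.amount) AS total
        |FROM sales sa JOIN stores st ON sa.store_id = st.store_id
        |GROUP BY st.region
        |ORDER BY total DESC""".stripMargin).show()
    spark.stop()
  }
}
```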

Design and Implementation of a Search Engine based on Apache Spark (아파치 스파크 기반 검색엔진의 설계 및 구현)

  • Park, Ki-Sung;Choi, Jae-Hyun;Kim, Jong-Bae;Park, Jae-Won
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.1 / pp.17-28 / 2017
  • Recently, research on data has been actively conducted as data has become more valuable. Web crawlers, programs for data collection, have recently drawn attention because they can be applied in various fields. A web crawler can be defined as a tool that analyzes web pages and collects URLs by traversing web servers in an automated manner. For handling big data, distributed web crawlers based on Hadoop MapReduce are widely used, but they are difficult to use and face performance constraints. Apache Spark, an in-memory computing platform, is an alternative to MapReduce. A search engine, one of the main applications of a web crawler, displays the information matching a search keyword from the data the crawler has gathered. If search engines adopted a Spark-based web crawler instead of a traditional MapReduce-based one, data collection would be more rapid.
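
A minimal sketch of the fetch stage of such a Spark-based crawler: distribute a URL frontier across executors and download pages in parallel. Politeness, robots.txt handling, and link extraction are omitted; this is not the paper's design, and the URLs are placeholders.

```scala
import org.apache.spark.sql.SparkSession
import scala.io.Source
import scala.util.Try

// Distribute a URL frontier and fetch pages in parallel on the executors.
object CrawlFetchSketch {
  def fetch(url: String): Option[String] =
    Try(Source.fromURL(url, "UTF-8").mkString).toOption // None on failure

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("crawl-fetch").getOrCreate()
    val sc = spark.sparkContext

    val frontier = sc.parallelize(Seq(
      "https://example.com/", "https://example.org/"))

    // Each executor fetches its share of URLs; failed fetches are dropped.
    val pages = frontier.flatMap(u => fetch(u).map(html => (u, html.length)))
    pages.collect().foreach(println)
    spark.stop()
  }
}
```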

Development of Real-time High-Fidelity Video Processing System using Hadoop and Spark (하둡 및 스파크를 이용한 초고품질 영상 실시간 처리 시스템 개발)

  • Huh, Jingang;Kim, Yonghwan
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2018.06a / pp.326-328 / 2018
  • As interest grows in services for 4K/8K ultra-high-quality content, streaming services are also being actively researched. However, software-based video processing is constrained by the performance limits of a single PC. This paper proposes a system that enables real-time video processing through distributed processing. The proposed system consists of video packet analysis and splitting, distributed transcoding, and packet merging stages, and supports real-time distributed processing using Hadoop and Spark. Experimental results show an average transcoding speed of 74.47 fps for an ultra-high-quality input video (3840×2160@60Hz, YCbCr 4:2:2, 10-bit).
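
A sketch of the split/transcode/merge idea: segment paths are distributed across executors, each of which shells out to a transcoder. Here ffmpeg stands in for the paper's transcoder, and the paths and codec flags are hypothetical.

```scala
import org.apache.spark.sql.SparkSession
import scala.sys.process._

// Distribute video segments across executors and transcode them in parallel,
// one external process per segment.
object DistTranscodeSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("dist-transcode").getOrCreate()
    val sc = spark.sparkContext

    // Stage 1 (packet analysis and splitting) is assumed done;
    // each element names one segment file visible to every executor.
    val segments = sc.parallelize(Seq("/data/seg000.ts", "/data/seg001.ts"))

    // Stage 2: transcode segments in parallel.
    val results = segments.map { in =>
      val out = in.stripSuffix(".ts") + ".hevc.ts"
      val code = Seq("ffmpeg", "-y", "-i", in, "-c:v", "libx265", out).! // exit code
      (out, code == 0)
    }.collect()

    // Stage 3 (packet merging) would concatenate the outputs in order.
    results.foreach(println)
    spark.stop()
  }
}
```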
