• Title/Summary/Keyword: SPARK 플랫폼 (SPARK platform)

Design and Implementation of a Big Data Analytics Framework based on Cargo DTG Data for Crackdown on Overloaded Trucks

  • Kim, Bum-Soo
    • Journal of the Korea Society of Computer and Information / v.24 no.12 / pp.67-74 / 2019
  • In this paper, we design and implement an analytics platform based on bulk cargo DTG data for cracking down on overloaded trucks. A DTG (digital tachograph) is a device that stores driving records in real time; that is, it records vehicle driving data such as GPS position, speed, RPM, braking, and distance traveled at one-second intervals. Fast processing of DTG data is essential for finding vehicle driving patterns and performing analytics. In particular, a big data analytics platform is required for preprocessing and converting large amounts of DTG data. In this paper, we implement a big data analytics framework for cracking down on overloaded trucks based on cargo DTG data using Spark, an open-source big data framework. The implemented platform converts real, large cargo DTG data sets into GIS data, visualizes them on a map, and recommends crackdown points.
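
The paper's code is not included in this listing; the following is a minimal PySpark sketch, under an assumed column layout (vehicle_id, lat, lon, speed) for one-second DTG records, of the kind of pipeline the abstract describes: loading DTG data, snapping GPS points to a coarse grid as a stand-in for the GIS conversion, and ranking cells as candidate crackdown points.

```python
# Hypothetical sketch: aggregate one-second cargo DTG records into grid cells
# and rank cells by heavy-truck traffic as candidate crackdown points.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dtg-crackdown-sketch").getOrCreate()

# Assumed column layout; the real DTG schema is not given in the abstract.
dtg = (spark.read.option("header", True).option("inferSchema", True)
       .csv("hdfs:///data/dtg/*.csv"))

# Snap GPS points to a coarse grid (~0.01 degree) as a stand-in for GIS mapping.
cells = (dtg
         .withColumn("cell_lat", F.round(F.col("lat"), 2))
         .withColumn("cell_lon", F.round(F.col("lon"), 2)))

# Rank cells by distinct trucks observed and average speed.
candidates = (cells.groupBy("cell_lat", "cell_lon")
              .agg(F.countDistinct("vehicle_id").alias("trucks"),
                   F.avg("speed").alias("avg_speed"))
              .orderBy(F.desc("trucks")))

candidates.show(10)  # top-10 candidate crackdown points
```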

Spatial Computation on Spark Using GPGPU (GPGPU를 활용한 스파크 기반 공간 연산)

  • Son, Chanseung;Kim, Daehee;Park, Neungsoo
    • KIPS Transactions on Computer and Communication Systems / v.5 no.8 / pp.181-188 / 2016
  • Recently, as the amount of spatial information has grown, interest in spatial information processing has increased. Spatial database systems extended from traditional relational database systems have difficulty handling large data sets because of limited scalability. SpatialHadoop, which extends the Hadoop system, suffers from low performance because its spatial computations require many disk writes of intermediate results, degrading performance. In this paper, Spatial Computation Spark (SC-Spark), an in-memory distributed processing framework, is proposed. SC-Spark extends Spark to efficiently perform spatial operations on large-scale data. In addition, a GPGPU-based SC-Spark is developed to further improve performance. SC-Spark takes advantage of Spark's ability to hold intermediate results in memory, and the GPGPU-based SC-Spark performs spatial operations in parallel using the many processing elements of a GPU. To verify the proposed work, experiments were performed on a single AMD system using SC-Spark and GPGPU-based SC-Spark for point-in-polygon and spatial join operations. The experimental results showed that SC-Spark and GPGPU-based SC-Spark were up to 8 times faster than SpatialHadoop.
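
SC-Spark itself and its GPGPU kernels are not shown in this listing; as a point of reference for the operation being accelerated, the sketch below implements the standard ray-casting point-in-polygon test on plain Spark RDDs (CPU only), with a toy polygon and point set.

```python
# Plain PySpark sketch of the point-in-polygon operation the paper accelerates;
# SC-Spark's GPGPU kernels are not reproduced here, only a CPU-side ray-casting test.
from pyspark.sql import SparkSession

def point_in_polygon(point, polygon):
    """Ray-casting test: toggle 'inside' each time a horizontal ray crosses an edge."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

spark = SparkSession.builder.appName("pip-sketch").getOrCreate()
sc = spark.sparkContext

polygon = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0), (0.0, 10.0)]  # toy region
points = sc.parallelize([(2.0, 3.0), (11.0, 1.0), (5.0, 9.5)])

hits = points.filter(lambda p: point_in_polygon(p, polygon)).collect()
print(hits)  # points that fall inside the polygon
```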

Big Data Platform for Learning in Cloud Computing Environment (클라우드 컴퓨팅 환경에서의 학습용 빅 데이터 플랫폼 설계)

  • Kim, Jun Heon
    • Proceedings of The KACE / 2017.08a / pp.63-64 / 2017
  • As information technology continues to advance, vast amounts of data are being generated across a wide range of fields, and research and education on big data for processing such data are being actively pursued. This requires high-performance servers for data analysis and processing as well as multiple computers for distributed processing, which makes it difficult to learn big data individually or in low-spec classroom environments. A cloud-based system is therefore needed for effective big data learning in a virtual environment. Accordingly, this paper describes the construction of a big data learning platform using Spark, one of the major big data processing technologies.
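
The abstract does not detail the platform's configuration; as a minimal illustration of the low-cost learning scenario it motivates, the sketch below starts Spark in local mode on a single low-spec machine, which is one common way to practice Spark exercises without a cluster.

```python
# Minimal sketch: running Spark in local mode so big data exercises can be
# practiced without a dedicated cluster, in line with the learning scenario above.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("classroom-spark")
         .master("local[2]")                    # use two local cores instead of a cluster
         .config("spark.driver.memory", "1g")   # keep memory modest for low-spec PCs
         .getOrCreate())

df = spark.range(1_000_000)                     # toy dataset for an exercise
print(df.selectExpr("sum(id)").first()[0])
spark.stop()
```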

A Performance Comparison of Machine Learning Library based on Apache Spark for Real-time Data Processing (실시간 데이터 처리를 위한 아파치 스파크 기반 기계 학습 라이브러리 성능 비교)

  • Song, Jun-Seok;Kim, Sang-Young;Song, Byung-Hoo;Kim, Kyung-Tae;Youn, Hee-Yong
    • Proceedings of the Korean Society of Computer Information Conference / 2017.01a / pp.15-16 / 2017
  • With the arrival of the IoT era, large-scale data is generated in real time, and interest in distributed processing and machine learning for efficiently processing and utilizing this data is growing. Apache Spark is a distributed processing platform that supports RDD-based in-memory processing and integrates with various machine learning libraries, and it has recently attracted attention as a next-generation big data analytics engine. In this paper, we compare the data processing performance of MLlib, Apache Mahout, and SparkR, machine learning libraries that can be used with Apache Spark. For this purpose, we use the Naive Bayes algorithm, a representative machine learning algorithm, and compare training time and prediction time to identify the machine learning library best suited for real-time data processing on Apache Spark.
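
Only the MLlib side of the comparison is sketched below, with a placeholder data path: training a Naive Bayes model with pyspark.ml and timing training and prediction, which mirrors the measurements the abstract describes.

```python
# Sketch of the MLlib side of the comparison: time Naive Bayes training and
# prediction with pyspark.ml. Dataset path and schema are placeholders.
import time
from pyspark.sql import SparkSession
from pyspark.ml.classification import NaiveBayes
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

spark = SparkSession.builder.appName("nb-benchmark-sketch").getOrCreate()

# libsvm is a convenient input format with ready-made 'label'/'features' columns.
data = spark.read.format("libsvm").load("hdfs:///data/sample_libsvm_data.txt")
train, test = data.randomSplit([0.8, 0.2], seed=42)

start = time.time()
model = NaiveBayes(smoothing=1.0, modelType="multinomial").fit(train)
train_time = time.time() - start

start = time.time()
predictions = model.transform(test)
predictions.count()              # force evaluation to measure prediction time
predict_time = time.time() - start

acc = MulticlassClassificationEvaluator(metricName="accuracy").evaluate(predictions)
print(f"train {train_time:.2f}s, predict {predict_time:.2f}s, accuracy {acc:.3f}")
```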

Design of a Platform for Collecting and Analyzing Agricultural Big Data (농업 빅데이터 수집 및 분석을 위한 플랫폼 설계)

  • Nguyen, Van-Quyet;Nguyen, Sinh Ngoc;Kim, Kyungbaek
    • Journal of Digital Contents Society / v.18 no.1 / pp.149-158 / 2017
  • Big data presents exciting opportunities and challenges for economic development. For instance, in the agriculture sector, combining various agricultural data (e.g., weather data and soil data) and subsequently analyzing them delivers valuable and helpful information to farmers and agribusinesses. However, massive agricultural data are generated every minute through many kinds of devices and services, such as sensors and agricultural web markets. This leads to big data challenges in data collection, data storage, and data analysis. Although some systems have been proposed to address these problems, they are still restricted in the type of data, the type of storage, or the size of data they can handle. In this paper, we propose a novel design of a platform for collecting and analyzing agricultural big data. The proposed platform supports (1) multiple methods of collecting data from various data sources using Flume and MapReduce; (2) multiple choices of data storage, including HDFS, HBase, and Hive; and (3) big data analysis modules with Spark and Hadoop.
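
As an illustration of the analysis stage of such a platform, the sketch below assumes a hypothetical Hive table of sensor readings (farm_sensors) landed by the collection layer, computes daily aggregates with Spark, and writes the result back to HDFS.

```python
# Sketch of a Spark analysis module in such a platform: read a (hypothetical)
# Hive table of sensor readings and compute daily per-farm aggregates.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("agri-analytics-sketch")
         .enableHiveSupport()          # use the Hive metastore as the storage layer
         .getOrCreate())

# Assumed table: farm_sensors(farm_id, ts, temperature, soil_moisture)
readings = spark.table("farm_sensors")

daily = (readings
         .withColumn("day", F.to_date("ts"))
         .groupBy("farm_id", "day")
         .agg(F.avg("temperature").alias("avg_temp"),
              F.avg("soil_moisture").alias("avg_moisture")))

# Write results back to HDFS for downstream dashboards.
daily.write.mode("overwrite").parquet("hdfs:///analytics/daily_farm_summary")
```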

Item Recommendation Technique Using Spark (Spark를 이용한 항목 추천 기법에 관한 연구)

  • Yun, So-Young;Youn, Sung-Dae
    • Journal of the Korea Institute of Information and Communication Engineering / v.22 no.5 / pp.715-721 / 2018
  • With the spread of mobile devices, the number of users of social network services and e-commerce sites has increased dramatically, and the amount of data produced by these users has grown exponentially. E-commerce companies face the task of extracting useful information from this vast amount of user-generated data. To solve this problem, various studies have applied big data processing techniques. In this paper, we propose a collaborative filtering method that applies tag weights on the Apache Spark platform. To improve recommendation accuracy, the proposed method refines the tag data in a preprocessing step, categorizes the items, and then applies period information and tag weights when estimating item ratings. After generating RDDs, it calculates item similarity and prediction values and recommends items to users. The experimental results indicate that the proposed method processes large amounts of data quickly and improves the appropriateness of recommendations.
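
The paper's specific tag-weight and period adjustments are not reproduced here; the sketch below shows the generic RDD-based step the abstract builds on, computing item-item cosine similarity from toy (user, item, rating) triples.

```python
# Generic RDD-based item-item cosine similarity over co-rated (user, item, rating)
# triples; the paper's tag-weight and period adjustments would modify ratings upstream.
import math
from itertools import combinations
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("item-cf-sketch").getOrCreate()
sc = spark.sparkContext

ratings = sc.parallelize([            # toy (user, item, rating) data
    ("u1", "A", 4.0), ("u1", "B", 5.0),
    ("u2", "A", 5.0), ("u2", "B", 4.0), ("u2", "C", 2.0),
    ("u3", "B", 3.0), ("u3", "C", 4.0),
])

# Group each user's ratings and emit co-rated item pairs.
by_user = ratings.map(lambda r: (r[0], (r[1], r[2]))).groupByKey()
pairs = by_user.flatMap(
    lambda kv: [((a[0], b[0]), (a[1], b[1]))
                for a, b in combinations(sorted(kv[1]), 2)])

def cosine(vals):
    dot = sum(x * y for x, y in vals)
    na = math.sqrt(sum(x * x for x, _ in vals))
    nb = math.sqrt(sum(y * y for _, y in vals))
    return dot / (na * nb) if na and nb else 0.0

similarities = pairs.groupByKey().mapValues(lambda vs: cosine(list(vs)))
print(similarities.collect())         # [((item_i, item_j), similarity), ...]
```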

A Study for Big Data Analytics Platform with Raspberry Pi Cluster and Apache Spark (라즈베리 파이 클러스터와 아파치 스파크를 활용한 빅데이터 분석 플랫폼 연구)

  • Kim, Young-Sun;Park, Ji-Young;Yoon, Bo-Ram;Lee, Jung-Hyun;Yong, Hwan-Seung
    • Proceedings of the Korea Information Processing Society Conference / 2015.10a / pp.1272-1275 / 2015
  • Parallel and distributed processing systems for big data analysis, which have recently been attracting growing attention, require large servers and costly infrastructure. To address this, this study builds a big data analytics platform that uses a cluster of inexpensive Raspberry Pi boards with Apache Spark, which provides faster processing than Hadoop, as the analysis solution. To check whether the constructed platform performs adequately for big data workloads, text mining was carried out, and the results showed acceptable performance. As big data analysis becomes feasible at a reasonable cost, small and medium-sized enterprises, individuals, and educational institutions will also be able to make use of big data, and its range of applications is expected to expand greatly.
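
The exact text-mining workload is not specified in the abstract; the sketch below uses a word-count job over a placeholder HDFS corpus as a representative text-processing workload for exercising a small Spark cluster.

```python
# Word-count sketch as a stand-in for the text-mining workload used to exercise
# a small Spark cluster; the corpus path and token filtering are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("pi-cluster-textmining").getOrCreate()

lines = spark.read.text("hdfs:///corpus/*.txt")

words = (lines
         .select(F.explode(F.split(F.lower(F.col("value")), r"\W+")).alias("word"))
         .filter(F.length("word") > 2))     # crude filter for short tokens

top = (words.groupBy("word").count()
       .orderBy(F.desc("count"))
       .limit(20))

top.show()                                   # 20 most frequent terms
```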

An Implementation of Web-Enabled OLAP Server in Korean HealthCare BigData Platform (한국 보건의료 빅데이터 플랫폼에서 웹 기반 OLAP 서버 구현)

  • Ly, Pichponreay;Kim, Jin-hyuk;Jung, Seung-hyun;Lee, Kyung-hee;Cho, Wan-sup
    • Proceedings of the Korea Contents Association Conference / 2017.05a / pp.33-34 / 2017
  • In 2015, the Ministry of Health and Welfare of Korea announced a research and development plan to use Korean healthcare data to support decision making, reduce costs, and improve treatment. The project relies on the adoption of big data technologies such as Apache Hadoop and Apache Spark to store and process healthcare data from various institutions. Here we present the design and implementation of a web-enabled OLAP server in the Korean healthcare big data platform. This approach establishes a basis for promoting personalized healthcare research for decision making, disease forecasting, and the development of customized diagnosis and treatment.
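
The OLAP server's internals are not included in this listing; the sketch below shows the kind of OLAP-style aggregation Spark SQL supports, a CUBE over hypothetical healthcare dimensions (region, disease_code, year) that a web front end could slice and dice.

```python
# Sketch of an OLAP-style aggregation in Spark SQL: a CUBE over hypothetical
# healthcare dimensions; the real platform's schema is not public.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("healthcare-olap-sketch").getOrCreate()

# Assumed claims table layout: region, disease_code, year, cost.
claims = (spark.read.option("header", True).option("inferSchema", True)
          .csv("hdfs:///healthcare/claims.csv"))

cube = (claims
        .cube("region", "disease_code", "year")      # all dimension combinations
        .agg(F.count(F.lit(1)).alias("cases"),
             F.sum("cost").alias("total_cost")))

cube.orderBy("region", "disease_code", "year").show(20)
```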

Spark-Based Big Data Preprocessing for Text Summarization (텍스트 요약을 위한 스파크 기반 대용량 데이터 전처리)

  • Ji, Dong-Jun;Jun, Hee-Gook;Im, Dong-Hyuk
    • Proceedings of the Korea Information Processing Society Conference / 2022.11a / pp.383-385 / 2022
  • Text summarization is one of the major tasks in natural language processing (NLP). Building a highly accurate deep learning model for document summarization requires large-scale training data, and preprocessing such large data raises problems such as long processing times and memory management. This paper proposes a method for improving the data preprocessing stage of an abstractive summarization deep learning model using Apache Spark, a large-scale parallel processing platform. Experimental results show that the proposed method reduces data preprocessing time compared with the existing approach.
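
The paper's preprocessing rules are not reproduced here; the sketch below shows a generic Spark preprocessing pass over assumed article/summary pairs in JSON Lines, cleaning text in parallel and writing Parquet output for model training.

```python
# Sketch of Spark-based preprocessing for summarization training data:
# clean article/summary pairs in parallel and write them out for model training.
# Input path and cleaning rules are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("summarization-preprocess-sketch").getOrCreate()

# Assumed JSON Lines input with 'article' and 'summary' fields.
raw = spark.read.json("hdfs:///corpus/summarization/*.jsonl")

def clean(col):
    # Strip HTML-like tags and collapse whitespace.
    no_tags = F.regexp_replace(col, r"<[^>]+>", " ")
    return F.trim(F.regexp_replace(no_tags, r"\s+", " "))

prepared = (raw
            .withColumn("article", clean(F.col("article")))
            .withColumn("summary", clean(F.col("summary")))
            .filter(F.length("article") > 100))   # drop near-empty documents

prepared.write.mode("overwrite").parquet("hdfs:///corpus/summarization_clean")
```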

Development of Big Data and AutoML Platforms for Smart Plants (스마트 플랜트를 위한 빅데이터 및 AutoML 플랫폼 개발)

  • Jin-Young Kang;Byeong-Seok Jeong
    • The Journal of Bigdata / v.8 no.2 / pp.83-95 / 2023
  • Big data analytics and AI play a critical role in the development of smart plants. This study presents a big data platform for plant data and an AutoML platform for AI-based plant O&M (operation and maintenance). The big data platform collects, processes, and stores the large volumes of data generated in plants using Hadoop, Spark, and Kafka. The AutoML platform is a machine learning automation system aimed at constructing predictive models for equipment prognostics and process optimization in plants. The developed platforms configure a data pipeline with compatibility with existing plant OISs (operation information systems) in mind and employ a web-based GUI to enhance accessibility and convenience for users. They also allow user-customizable modules to be loaded into the data processing and learning stages, which increases flexibility. This paper demonstrates the operation of the platforms on a specific process of an oil company in Korea and presents an example of an effective data utilization platform for smart plants.
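
The platform's actual pipeline is not published in this listing; the sketch below shows the Kafka-to-Spark ingestion step such an architecture typically builds on, reading plant sensor messages from a placeholder topic with Structured Streaming and landing them as Parquet for downstream AutoML jobs (this requires the spark-sql-kafka connector package).

```python
# Sketch of Kafka-to-Spark ingestion for plant sensor data: topic name, broker
# address, and message schema are placeholders, not the platform's real ones.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("plant-ingest-sketch").getOrCreate()

schema = StructType([
    StructField("tag_id", StringType()),
    StructField("value", DoubleType()),
    StructField("event_time", TimestampType()),
])

stream = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")   # placeholder address
          .option("subscribe", "plant-sensors")               # placeholder topic
          .load())

parsed = (stream
          .select(F.from_json(F.col("value").cast("string"), schema).alias("m"))
          .select("m.*"))

query = (parsed.writeStream
         .format("parquet")
         .option("path", "hdfs:///plant/raw")
         .option("checkpointLocation", "hdfs:///plant/checkpoints/raw")
         .trigger(processingTime="1 minute")
         .start())

query.awaitTermination()
```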