• Title/Summary/Keyword: Hadoop environment (하둡 환경)

Performance Comparison of Spatial Split Algorithms for Spatial Data Analysis on Spark (Spark 기반 공간 분석에서 공간 분할의 성능 비교)

  • Yang, Pyoung Woo;Yoo, Ki Hyun;Nam, Kwang Woo
    • Journal of Korean Society for Geospatial Information Science / v.25 no.1 / pp.29-36 / 2017
  • In this paper, we implement a spatial big data analysis prototype based on Spark, an in-memory system, and use it to compare the performance of spatial split algorithms. In cluster computing environments, big data is divided into blocks of a certain size in order to balance the computing load. Existing research showed that, for Hadoop-based spatial big data systems, splitting by spatial extent is more effective than the general sequential split method. A Hadoop-based spatial data system stores the raw data as-is in spatially divided blocks. In the proposed Spark-based spatial analysis system, by contrast, spatial data is converted into an in-memory data structure and stored in spatial blocks for search efficiency. Therefore, in this paper we propose an in-memory spatial big data prototype and a spatially split block storage method, and compare the performance of existing spatial split algorithms on this prototype, presenting an appropriate spatial split strategy for a Spark-based big data system. In the experiments, we compared the query execution time of the spatial split algorithms and confirmed that the BSP algorithm shows the best performance.
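
The abstract does not include code; as a rough illustration of the idea behind a BSP-style spatial split, the following minimal Python sketch recursively divides 2D points along the wider axis at the median until each block holds at most a given number of points. The function name, capacity parameter, and data are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch of a BSP-style spatial split: recursively divide 2D points
# along the wider axis until each block holds at most `capacity` points.
# Names and parameters are illustrative, not taken from the paper.
from typing import List, Tuple

Point = Tuple[float, float]

def bsp_split(points: List[Point], capacity: int = 1000) -> List[List[Point]]:
    if len(points) <= capacity:
        return [points]
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    # Split along the axis with the larger extent, at the median coordinate,
    # so both halves carry a balanced share of the data.
    axis = 0 if (max(xs) - min(xs)) >= (max(ys) - min(ys)) else 1
    points_sorted = sorted(points, key=lambda p: p[axis])
    mid = len(points_sorted) // 2
    return bsp_split(points_sorted[:mid], capacity) + bsp_split(points_sorted[mid:], capacity)

# Example: 10,000 random points split into blocks of at most 1,000 points.
if __name__ == "__main__":
    import random
    pts = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(10_000)]
    blocks = bsp_split(pts, capacity=1_000)
    print(len(blocks), [len(b) for b in blocks])
```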

Distributed Recommendation System Using Clustering-based Collaborative Filtering Algorithm (클러스터링 기반 협업 필터링 알고리즘을 사용한 분산 추천 시스템)

  • Jo, Hyun-Je;Rhee, Phill-Kyu
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.14 no.1 / pp.101-107 / 2014
  • This paper presents an efficient distributed recommendation system using a clustering-based collaborative filtering algorithm in distributed computing environments. The system was built on the Hadoop distributed computing platform, where a distributed Min-hash clustering algorithm is combined with a user-based collaborative filtering algorithm to optimize recommendation performance. Experiments using the MovieLens benchmark data show that the proposed system can reduce the execution time for recommendation compared to a sequential system.
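
As a hedged illustration of the clustering step described above, the sketch below computes MinHash signatures for user item sets in plain Python (not on Hadoop); users whose signatures agree on many positions have high estimated Jaccard similarity and could be placed in the same cluster before user-based collaborative filtering. The hash family, seed, and user data are invented.

```python
# Illustrative (non-Hadoop) sketch of MinHash signatures used to group users
# with similar item histories before user-based collaborative filtering.
import random

NUM_HASHES = 16
P = 2_147_483_647
random.seed(42)
# A family of simple universal hash functions h(x) = (a*x + b) mod p.
hash_params = [(random.randrange(1, P), random.randrange(0, P)) for _ in range(NUM_HASHES)]

def minhash_signature(item_ids):
    """MinHash signature of a set of item ids: per hash function, the minimum hash value."""
    return tuple(min(((a * i + b) % P) for i in item_ids) for a, b in hash_params)

users = {
    "u1": {1, 2, 3, 4},
    "u2": {2, 3, 4, 5},
    "u3": {10, 11, 12},
}

# Users whose signatures collide on many positions are likely similar (high Jaccard),
# so they can be clustered together and recommendations computed within the cluster.
sigs = {u: minhash_signature(items) for u, items in users.items()}
overlap = sum(a == b for a, b in zip(sigs["u1"], sigs["u2"])) / NUM_HASHES
print(f"estimated Jaccard(u1, u2) ~ {overlap:.2f}")
```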

Development of a Privacy-Preserving Big Data Publishing System in Hadoop Distributed Computing Environments (하둡 분산 환경 기반 프라이버시 보호 빅 데이터 배포 시스템 개발)

  • Kim, Dae-Ho;Kim, Jong Wook
    • Journal of Korea Multimedia Society / v.20 no.11 / pp.1785-1792 / 2017
  • Big data generally contains sensitive information about individuals, so releasing it directly for public use may violate existing privacy requirements. Therefore, privacy-preserving data publishing (PPDP) has been actively researched as a way to share big data containing personal information for public use while protecting the privacy of individuals with minimal data modification. Recently, with increasing demand for big data sharing in various areas, there is also growing interest in software that supports privacy-preserving data publishing. Thus, in this paper, we develop a system that aims to support privacy-preserving data publishing effectively and efficiently. In particular, the developed system enables data owners to select an appropriate anonymization level by providing them with an information loss matrix. Furthermore, the system achieves high data anonymization performance by using distributed Hadoop clusters.
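
The following hypothetical sketch illustrates how an anonymization level can be mapped to an information-loss value so a data owner can choose a trade-off, using age generalization as an example. The attribute, bucket widths, and loss metric are assumptions for illustration; the actual system described above runs on Hadoop and presents an information loss matrix.

```python
# Hypothetical sketch of how an anonymization level maps to information loss:
# ages are generalized to coarser ranges and the loss is the normalized range width.
AGE_MIN, AGE_MAX = 0, 100

def generalize_age(age: int, level: int) -> str:
    """Level 0 keeps the raw value; higher levels widen the age bucket."""
    if level == 0:
        return str(age)
    width = 5 * (2 ** (level - 1))          # 5, 10, 20, ... year buckets
    lo = (age // width) * width
    return f"{lo}-{lo + width - 1}"

def information_loss(level: int) -> float:
    """Normalized width of the generalized interval (0 = exact, 1 = fully suppressed)."""
    if level == 0:
        return 0.0
    width = 5 * (2 ** (level - 1))
    return min(width / (AGE_MAX - AGE_MIN), 1.0)

ages = [23, 37, 41, 58]
for level in range(4):
    print(level, [generalize_age(a, level) for a in ages], f"loss={information_loss(level):.2f}")
```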

Performance Evaluation Between PC and RaspberryPI Cluster in Apache Spark for Processing Big Data (빅데이터 처리를 위한 PC와 라즈베리파이 클러스터에서의 Apache Spark 성능 비교 평가)

  • Seo, Ji-Hye;Park, Mi-Rim;Yang, Hye-Kyung;Yong, Hwan-Seung
    • Proceedings of the Korea Information Processing Society Conference / 2015.10a / pp.1265-1267 / 2015
  • With the recent emergence of IoT technology, Raspberry Pi clusters, built from low-power small computers, are being used to process IoT data. As IoT technology advances, diverse data are being generated and big data processing is required even in IoT environments. Hadoop is generally used as the big data processing framework, and Apache Spark has emerged as an alternative solution. In this paper, we compare the performance of a PC cluster and a Raspberry Pi cluster using Apache Spark. For the experiment, we used Yelp data and compared performance in terms of data load time and data processing time with Spark SQL.
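
A minimal PySpark sketch of the kind of measurement described in the abstract is shown below: it loads a JSON dataset, registers it for Spark SQL, and times both the load and a simple aggregation query. The file path, schema, and query are placeholders rather than the actual Yelp data and queries used in the experiment.

```python
# Minimal PySpark sketch: load a JSON dataset and time a Spark SQL aggregation.
# The file path and query are placeholders, not the paper's actual Yelp workload.
import time
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pc-vs-raspberrypi-benchmark").getOrCreate()

t0 = time.time()
df = spark.read.json("data/reviews.json")   # placeholder path
df.createOrReplaceTempView("reviews")
load_seconds = time.time() - t0

t1 = time.time()
result = spark.sql("""
    SELECT business_id, AVG(stars) AS avg_stars, COUNT(*) AS n
    FROM reviews
    GROUP BY business_id
    ORDER BY n DESC
""").collect()
query_seconds = time.time() - t1

print(f"load: {load_seconds:.2f}s, query: {query_seconds:.2f}s, rows: {len(result)}")
spark.stop()
```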

Visualization of Anomaly Detection in Hadoop System Information (하둡 시스템 정보의 이상탐지를 위한 시각화)

  • Yang, Seokwoo;Son, Siwoon;Gil, Myeong-Seon;Moon, Yang-Sae;Won, Hee-Sun
    • Proceedings of the Korea Information Processing Society Conference / 2015.04a / pp.702-705 / 2015
  • In this paper, we design and implement a visualization function for anomaly detection in system information in a Hadoop environment. The proposed anomaly detection visualization consists of three main steps. First, system log data (cache and main memory usage) is collected from each node and stored in Hive. Next, anomaly detection is performed on the stored data by applying the three-sigma rule, and the results are reprocessed into a form suitable for a relational database. Finally, the anomaly detection results are exported to an RDBMS (MariaDB) via Sqoop and visualized with the DHTMLX chart library. The resulting visualization makes it possible to intuitively understand anomalies in the log data and the correlations between the data.
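
As a standalone illustration of the detection step, the sketch below applies the three-sigma rule to a series of memory-usage readings; in the system described above this step runs over log data collected into Hive, with the results exported to MariaDB via Sqoop. The sample values are invented.

```python
# Standalone sketch of the three-sigma rule applied to memory-usage readings (MB).
import statistics

readings = [510 + (i % 7) for i in range(20)] + [2048]  # steady usage plus one spike

mean = statistics.mean(readings)
stdev = statistics.pstdev(readings)

# A reading is flagged as anomalous if it falls outside mean +/- 3*stdev.
anomalies = [x for x in readings if abs(x - mean) > 3 * stdev]
print(f"mean={mean:.1f}, stdev={stdev:.1f}, anomalies={anomalies}")
```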

Learning System for Big Data Analysis based on the Raspberry Pi Board (라즈베리파이 보드 기반의 빅데이터 분석을 위한 학습 시스템)

  • Kim, Young-Geun;Jo, Min-Hui;Kim, Won-Jung
    • The Journal of the Korea institute of electronic communication sciences / v.11 no.4 / pp.433-440 / 2016
  • To build a system for big data processing, one needs either to configure nodes by using network equipment to connect multiple computers or to establish a cloud environment through virtual hosts on a single computer. However, there are many obstacles to constructing such a big data analysis system, including complex system configuration and cost. These constraints have become a major barrier to training professionals for the big data field, which is emerging as a key element of national competitiveness. Therefore, for training big data professionals, this paper proposes a Raspberry Pi board based educational big data processing system that enables practical training at an affordable price.

Performance Factor of Distributed Processing of Machine Learning using Spark (스파크를 이용한 머신러닝의 분산 처리 성능 요인)

  • Ryu, Woo-Seok
    • The Journal of the Korea institute of electronic communication sciences / v.16 no.1 / pp.19-24 / 2021
  • In this paper, we study the performance factors of machine learning in a distributed environment using Apache Spark and present an efficient distributed processing method through experiments. We first identify the factors that affect performance when machine learning is performed on a distributed cluster, classified into cluster performance, data size, and the configuration of the Spark engine. We then study the performance of regression analysis using Spark MLlib running on a Hadoop cluster while varying the node configuration and the Spark executor settings. The experiments confirmed that the effective number of executors is affected by the number of data blocks, but that, depending on the cluster size, its maximum and minimum are limited by the number of cores and the number of worker nodes, respectively.
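
The sketch below shows, in PySpark, the kind of executor configuration varied in this study together with an MLlib linear regression; the resource numbers, input path, and column names are placeholders, not the paper's actual settings.

```python
# Illustrative PySpark executor configuration plus an MLlib linear regression.
# Resource numbers and the input path are placeholders.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

spark = (
    SparkSession.builder
    .appName("mllib-regression-scaling")
    # Per the paper, the effective executor count is bounded above by the number
    # of cores and below by the number of worker nodes; 4/2/2g are example values.
    .config("spark.executor.instances", "4")
    .config("spark.executor.cores", "2")
    .config("spark.executor.memory", "2g")
    .getOrCreate()
)

df = spark.read.csv("data/train.csv", header=True, inferSchema=True)  # placeholder path, assumes a "label" column
assembler = VectorAssembler(inputCols=[c for c in df.columns if c != "label"],
                            outputCol="features")
train = assembler.transform(df).select("features", "label")

model = LinearRegression(featuresCol="features", labelCol="label").fit(train)
print("RMSE:", model.summary.rootMeanSquaredError)
spark.stop()
```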

A Scalable OWL Horst Lite Ontology Reasoning Approach based on Distributed Cluster Memories (분산 클러스터 메모리 기반 대용량 OWL Horst Lite 온톨로지 추론 기법)

  • Kim, Je-Min;Park, Young-Tack
    • Journal of KIISE / v.42 no.3 / pp.307-319 / 2015
  • Current ontology studies use the Hadoop distributed storage framework to perform MapReduce algorithm-based reasoning over scalable ontologies. In this paper, however, we propose a novel approach to scalable Web Ontology Language (OWL) Horst Lite ontology reasoning based on distributed cluster memories. Rule-based reasoning, which is frequently used for scalable ontologies, iteratively executes triple-format ontology rules until no new data can be inferred. Therefore, when scalable ontology reasoning is performed on hard drives, the ontology reasoner suffers from performance limitations. To overcome this drawback, we propose an approach that loads the ontologies into distributed cluster memories using Spark, a memory-based distributed computing framework, which then executes the ontology reasoning. To implement an appropriate OWL Horst Lite ontology reasoning system on Spark, our method divides the scalable ontologies into blocks, loads each block into the cluster nodes, and then handles the data in the distributed memories. We used the Lehigh University Benchmark (LUBM), which evaluates ontology inference and search speed, to experimentally evaluate the proposed methods on LUBM8000 (1.1 billion triples, 155 gigabytes). Compared with WebPIE, a representative MapReduce algorithm-based scalable ontology reasoner, the proposed approach showed a throughput improvement of 320% (62k/s) over WebPIE (19k/s).
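
To make the fixed-point reasoning loop concrete, the following minimal PySpark sketch applies a single representative rule (transitivity of rdfs:subClassOf) to triples held as in-memory RDDs until no new triples are derived. The real system applies the full OWL Horst Lite rule set with block-wise loading; the rule choice and sample triples here are illustrative only.

```python
# Minimal sketch of a fixed-point, rule-based reasoning loop over in-memory RDDs,
# using one representative rule: transitivity of rdfs:subClassOf.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("owl-horst-sketch").getOrCreate()
sc = spark.sparkContext

SUBCLASS = "rdfs:subClassOf"
triples = sc.parallelize([
    ("Student", SUBCLASS, "Person"),
    ("Person", SUBCLASS, "Agent"),
    ("Agent", SUBCLASS, "Thing"),
]).cache()

# Iterate: join subClassOf edges with themselves to derive new ones, until no growth.
while True:
    edges = triples.filter(lambda t: t[1] == SUBCLASS)
    by_obj = edges.map(lambda t: (t[2], t[0]))    # (B, A) for A subClassOf B
    by_subj = edges.map(lambda t: (t[0], t[2]))   # (B, C) for B subClassOf C
    derived = by_obj.join(by_subj).map(lambda kv: (kv[1][0], SUBCLASS, kv[1][1]))
    updated = triples.union(derived).distinct().cache()
    if updated.count() == triples.count():
        break
    triples = updated

print(sorted(triples.collect()))
spark.stop()
```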

Analysis of Factors for Korean Women's Cancer Screening through Hadoop-Based Public Medical Information Big Data Analysis (Hadoop기반의 공개의료정보 빅 데이터 분석을 통한 한국여성암 검진 요인분석 서비스)

  • Park, Min-hee;Cho, Young-bok;Kim, So Young;Park, Jong-bae;Park, Jong-hyock
    • Journal of the Korea Institute of Information and Communication Engineering / v.22 no.10 / pp.1277-1286 / 2018
  • In this paper, we build an Apache Hadoop based cloud environment for the analysis of public medical information big data, which provides flexible scalability of computing resources: storage, memory, and other resources can be extended quickly and flexibly as log data accumulates or grows over time. In addition, when real-time analysis of the accumulated unstructured log data is required, the system adopts a Hadoop-based analysis module to overcome the processing limits of existing analysis tools, enabling fast and reliable parallel distributed processing of large amounts of log data. For the big data analysis, frequency analysis and chi-square tests were performed; multivariate logistic regression at the 0.05 significance level was then applied to the significant variables (p < 0.05) for each of the three models.
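
A hypothetical PySpark sketch of the two statistical steps named above follows: a chi-square test to screen variables and a multivariate logistic regression on those significant at p < 0.05. The column names and toy data are invented; the paper analyzes public medical screening records.

```python
# Hypothetical sketch: chi-square screening followed by logistic regression in PySpark.
# Columns and data are invented for illustration.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.stat import ChiSquareTest
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("screening-factor-analysis").getOrCreate()

df = spark.createDataFrame(
    [(1.0, 0.0, 1.0, 1.0), (0.0, 1.0, 0.0, 0.0),
     (1.0, 1.0, 1.0, 1.0), (0.0, 0.0, 0.0, 0.0)],
    ["age_group", "insurance", "region", "screened"],   # invented columns
)

features = ["age_group", "insurance", "region"]
assembled = VectorAssembler(inputCols=features, outputCol="features").transform(df)

# Chi-square screening: keep features whose p-value is below 0.05.
pvalues = ChiSquareTest.test(assembled, "features", "screened").head().pValues.toArray()
kept = [f for f, p in zip(features, pvalues) if p < 0.05]
print("significant features:", kept)

# Multivariate logistic regression on the (illustrative) feature vector.
model = LogisticRegression(featuresCol="features", labelCol="screened").fit(assembled)
print("coefficients:", model.coefficients)
spark.stop()
```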

Matrix-based Filtering and Load-balancing Algorithm for Efficient Similarity Join Query Processing in Distributed Computing Environment (분산 컴퓨팅 환경에서 효율적인 유사 조인 질의 처리를 위한 행렬 기반 필터링 및 부하 분산 알고리즘)

  • Yang, Hyeon-Sik;Jang, Miyoung;Chang, Jae-Woo
    • The Journal of the Korea Contents Association / v.16 no.7 / pp.667-680 / 2016
  • As distributed computing platforms such as Hadoop MapReduce have been developed, it has become necessary to efficiently perform, in distributed computing environments, the conventional query processing techniques that used to run on a single machine. In particular, studies have been conducted on similarity join query processing in distributed computing environments, where a similarity join retrieves all data pairs with high similarity between two given data sets. However, the existing similarity join query processing schemes for distributed computing environments suffer from skewed computing loads between clusters because they consider only the data transmission cost. In this paper, we propose a matrix-based load-balancing algorithm for efficient similarity join query processing in distributed computing environments. To balance the load across clusters uniformly, the proposed algorithm estimates the expected computing cost using a matrix and generates partitions based on the estimated cost. In addition, it reduces the computing load by filtering out data that is not used in query processing in each cluster. Finally, our performance evaluation shows that the proposed algorithm outperforms the existing scheme in query processing performance.
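
The sketch below illustrates the general idea of cost-matrix-based filtering and load balancing for a similarity join: the cost of joining block i of R with block j of S is estimated from the block sizes, empty cells are filtered out, and the remaining cells are assigned greedily to the least-loaded worker. The cost model, block sizes, and greedy assignment are simplifications and do not reproduce the paper's exact algorithm.

```python
# Illustrative cost-matrix-based filtering and load balancing for a similarity join.
# Cost of joining R block i with S block j is estimated as |R_i| * |S_j|.
import heapq

r_block_sizes = [120, 5, 300, 80]     # invented record counts per block of R
s_block_sizes = [200, 0, 90, 150]     # invented record counts per block of S

# Cost matrix: candidate-pair count for each (R block, S block) combination.
cost = [[r * s for s in s_block_sizes] for r in r_block_sizes]

# Filtering: drop cells that produce no candidate pairs (no work to distribute).
cells = [((i, j), c) for i, row in enumerate(cost) for j, c in enumerate(row) if c > 0]

# Greedy load balancing: give the most expensive remaining cell
# to the currently least-loaded worker.
num_workers = 3
heap = [(0, w, []) for w in range(num_workers)]        # (load, worker id, assigned cells)
heapq.heapify(heap)
for cell, c in sorted(cells, key=lambda x: -x[1]):
    load, w, assigned = heapq.heappop(heap)
    heapq.heappush(heap, (load + c, w, assigned + [cell]))

for load, w, assigned in sorted(heap, key=lambda x: x[1]):
    print(f"worker {w}: load={load}, cells={assigned}")
```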