• Title/Summary/Keyword: MapReduce


Scaling of Hadoop Cluster for Cost-Effective Processing of MapReduce Applications (비용 효율적 맵리듀스 처리를 위한 클러스터 규모 설정)

  • Ryu, Woo-Seok
    • The Journal of the Korea institute of electronic communication sciences / v.15 no.1 / pp.107-114 / 2020
  • This paper studies a method for estimating the scale of a Hadoop cluster needed to process big data in a cost-effective manner. In the case of medical institutions, demand for cloud-based big data analysis is increasing now that medical records may be stored outside the hospital. The paper first analyzes the Amazon EMR framework, one of the popular cloud-based big data frameworks, and then presents an efficiency model for scaling the Hadoop cluster so that a MapReduce application can be executed more cost-effectively. It also analyzes the factors that influence the execution of the MapReduce application through several experiments under various conditions. The cost efficiency of big data analysis can be increased by choosing the cluster scale that gives the most efficient processing time relative to the operational cost.
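
As a rough illustration of the cluster-sizing idea described above, the following minimal Java sketch picks, from a set of candidate cluster sizes, the one with the best time-versus-cost trade-off. The Amdahl-style runtime model, the hourly node price, and the "efficiency = 1 / (time x cost)" measure are illustrative assumptions, not the efficiency model from the paper.

```java
// Minimal sketch of a time-vs-cost trade-off for cluster sizing.
// The time model (serialFraction, baseTimeHours) and hourly price are
// illustrative assumptions, not the efficiency model from the paper.
public class ClusterSizing {
    static double estimateRuntimeHours(int nodes, double baseTimeHours, double serialFraction) {
        // Simple Amdahl-style estimate: serial part + parallelizable part.
        return baseTimeHours * (serialFraction + (1.0 - serialFraction) / nodes);
    }

    public static void main(String[] args) {
        double baseTimeHours = 10.0;    // runtime on a single node (assumed)
        double serialFraction = 0.05;   // non-parallelizable share (assumed)
        double pricePerNodeHour = 0.25; // EMR-like hourly rate (assumed)

        int bestNodes = 1;
        double bestEfficiency = 0.0;
        for (int nodes = 1; nodes <= 64; nodes *= 2) {
            double hours = estimateRuntimeHours(nodes, baseTimeHours, serialFraction);
            double cost = hours * nodes * pricePerNodeHour;
            // "Efficiency" here: inverse of (time x cost); higher is better.
            double efficiency = 1.0 / (hours * cost);
            System.out.printf("nodes=%2d  time=%.2fh  cost=$%.2f  eff=%.3f%n",
                    nodes, hours, cost, efficiency);
            if (efficiency > bestEfficiency) {
                bestEfficiency = efficiency;
                bestNodes = nodes;
            }
        }
        System.out.println("Most cost-effective size (under these assumptions): " + bestNodes + " nodes");
    }
}
```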

A Hot-Data Replication Scheme Based on Data Access Patterns for Enhancing Processing Speed of MapReduce (맵-리듀스의 처리 속도 향상을 위한 데이터 접근 패턴에 따른 핫-데이터 복제 기법)

  • Son, Ingook;Ryu, Eunkyung;Park, Junho;Bok, Kyoungsoo;Yoo, Jaesoo
    • The Journal of the Korea Contents Association / v.13 no.11 / pp.21-27 / 2013
  • In recent years, with the growth of social media and the spread of mobile devices, the amount of data has increased significantly. Hadoop has been widely used as a representative distributed storage and processing framework. Tasks in MapReduce on the Hadoop distributed file system are assigned as close to their input data as possible, taking data locality into account. However, some data are requested far more frequently than others by MapReduce analysis tasks. In this paper, we propose a hot-data replication scheme that improves the processing speed of MapReduce according to data access patterns. The proposed scheme reduces task processing time and improves data locality by applying a replica optimization algorithm to hot data with high access frequency. Performance evaluation shows that the proposed scheme outperforms the existing scheme under heavy access-frequency load.
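
The replica optimization algorithm itself is not given in the abstract; the sketch below only illustrates the underlying idea of raising the HDFS replication factor for files whose access count crosses a threshold, assuming an externally collected access-count map and assumed threshold values.

```java
import java.io.IOException;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Minimal sketch: raise the HDFS replication factor for "hot" files.
// The access-count map and the thresholds are assumptions for illustration;
// the paper's replica optimization algorithm is more involved.
public class HotDataReplicator {
    static final long HOT_THRESHOLD = 100;   // accesses before a file counts as hot (assumed)
    static final short HOT_REPLICATION = 5;  // replication factor for hot data (assumed)

    public static void replicateHotFiles(Map<String, Long> accessCounts) throws IOException {
        FileSystem fs = FileSystem.get(new Configuration());
        for (Map.Entry<String, Long> e : accessCounts.entrySet()) {
            if (e.getValue() >= HOT_THRESHOLD) {
                // More replicas give the scheduler more chances to run map tasks node-locally.
                fs.setReplication(new Path(e.getKey()), HOT_REPLICATION);
            }
        }
    }
}
```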

Data Prefetching and Streaming for Improving the Performance of Mapreduce of Hadoop (하둡 맵리듀스 성능 향상을 위한 데이터 프리패칭과 스트리밍)

  • Lee, Jung June;Kim, Kyung Tae;Youn, Hee Yong
    • Proceedings of the Korean Society of Computer Information Conference / 2015.01a / pp.151-154 / 2015
  • Recently, with the emergence of social networks, bio-computing, and the Internet of Things, far more data are being generated than in traditional IT environments, and research on efficient large-scale data processing techniques is therefore under way. MapReduce is a programming model that is effective for data-intensive computational applications; a representative MapReduce implementation is Hadoop, developed and supported by the Apache Software Foundation. This paper proposes data prefetching and streaming techniques to improve the performance of Hadoop MapReduce. One of the performance issues of Hadoop MapReduce is job delay caused by input data transfer during the MapReduce process. To minimize this transfer time, unlike conventional MapReduce, a separate prefetching thread responsible for data transfer is created. As a result, data can be transferred even while MapReduce work on other data is in progress, reducing the overall processing time. Even with prefetching, the nature of Hadoop MapReduce means a job still waits for the initial data transfer; to reduce this wait, a streaming technique is applied, which further shortens the waiting time caused by data transfer. The performance of the proposed techniques is evaluated with a mathematical model, and the results show that MapReduce with both prefetching and streaming outperforms both the original Hadoop MapReduce and MapReduce with prefetching alone.
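
The sketch below illustrates the prefetching/streaming idea in plain Java: a dedicated thread transfers input chunks into a bounded buffer while the consumer processes chunks as soon as they arrive, so transfer and computation overlap. The chunk loader and processing step are placeholders, not the paper's implementation.

```java
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Minimal sketch of prefetching plus streaming: a dedicated thread transfers
// the next chunk of input while the current chunk is being processed.
public class PrefetchingPipeline {
    private final BlockingQueue<byte[]> buffer = new ArrayBlockingQueue<>(4);

    public void run(List<String> chunkPaths) throws InterruptedException {
        Thread prefetcher = new Thread(() -> {
            for (String path : chunkPaths) {
                try {
                    buffer.put(loadChunk(path)); // blocks when the buffer is full
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });
        prefetcher.start();

        for (int i = 0; i < chunkPaths.size(); i++) {
            byte[] chunk = buffer.take(); // streaming: start as soon as a chunk arrives
            process(chunk);               // overlaps with prefetching of later chunks
        }
        prefetcher.join();
    }

    private byte[] loadChunk(String path) { return new byte[0]; } // placeholder transfer
    private void process(byte[] chunk) { /* placeholder map work */ }
}
```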


A Join Query with Aggregation Functions Using MapReduce (집계 함수를 포함하는 조인 질의의 맵리듀스를 사용한 효율적인 처리 기법)

  • Oh, So Hyeon;Lee, Ki Yong
    • Proceedings of the Korea Information Processing Society Conference / 2015.04a / pp.132-135 / 2015
  • MapReduce is a programming model for processing big data, i.e., very large volumes of data, in a distributed environment. Aggregation functions can be used to analyze such large volumes of data. This paper proposes two strategies for executing SQL queries that contain aggregation functions more efficiently and at lower cost in a MapReduce environment; the strategy with the higher performance is taken to be the more efficient processing method. The first strategy joins the two tables and then applies the aggregation function. The second strategy first applies the aggregation function to minimize the number of tuples participating in the join, performs the join, and then applies the aggregation function again. Experiments comparing the two methods show that the second strategy incurs lower cost and therefore appears to be the more efficient processing method.
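
A minimal single-machine Java sketch of the second strategy (pre-aggregate, then join, then aggregate again) is shown below; in the paper this runs as MapReduce jobs, and the table layout used here is an assumption for illustration.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal single-machine sketch of the second strategy: aggregate first to
// shrink the number of tuples, then join the compact partial aggregates.
public class PreAggregateThenJoin {
    // sales rows: (customerId, amount); customerRegion: customerId -> region
    public static Map<String, Double> totalAmountPerRegion(
            List<String[]> sales, Map<String, String> customerRegion) {
        // Step 1: pre-aggregate sales per customer (reduces join input size).
        Map<String, Double> perCustomer = new HashMap<>();
        for (String[] row : sales) {
            perCustomer.merge(row[0], Double.parseDouble(row[1]), Double::sum);
        }
        // Step 2: join the compact aggregate with the customer table,
        // then aggregate again at the region level.
        Map<String, Double> perRegion = new HashMap<>();
        for (Map.Entry<String, Double> e : perCustomer.entrySet()) {
            String region = customerRegion.get(e.getKey());
            if (region != null) {
                perRegion.merge(region, e.getValue(), Double::sum);
            }
        }
        return perRegion;
    }
}
```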

A Novel Method of Improving Cache Hit-rate in Hadoop MapReduce using SSD Cache

  • Kim, Jong-Chan;An, Jae-Hoon;Kim, Young-Hwan;Jeon, Ki-Man
    • Journal of the Korea Society of Computer and Information / v.20 no.8 / pp.1-6 / 2015
  • MapReduce programs on the Hadoop Distributed File System run on unspecified nodes because of distributed-parallel processing and block replication for data stability. Since cache locality is difficult to guarantee when a Solid State Drive is used as a cache in Hadoop, the cache hit rate decreases. In this paper, we suggest a method to improve the cache hit rate by pre-loading the input data of a MapReduce job onto the SSD cache. To do this, we estimate the blocks that will be used on each node by using the capacity scheduler and block metadata. By loading these blocks onto the SSD cache before the map tasks run, we are able to increase the performance of the SSD cache.
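
The sketch below illustrates only the pre-loading step: copying the HDFS input a node is expected to read onto a local SSD cache directory before the map tasks start. How the expected blocks are estimated (via the capacity scheduler and block metadata in the paper) is outside this sketch, and the SSD mount point is an assumption.

```java
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Minimal sketch of the pre-loading idea: warm a local SSD cache with the
// HDFS input a node is expected to read before its map tasks start.
public class SsdCachePreloader {
    private static final String SSD_CACHE_DIR = "/mnt/ssd/hadoop-cache"; // assumed mount point

    public static void preload(List<String> expectedInputPaths) throws IOException {
        FileSystem fs = FileSystem.get(new Configuration());
        for (String hdfsPath : expectedInputPaths) {
            Path src = new Path(hdfsPath);
            Path dst = new Path(SSD_CACHE_DIR, src.getName());
            // Copy ahead of time so later reads hit local flash instead of remote disks.
            fs.copyToLocalFile(false, src, dst);
        }
    }
}
```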

Big Data Analysis Platform Technology R&D Trend through Patent Analysis (특허분석을 통한 빅데이터 분석 플랫폼 기술 개발 동향)

  • Rho, Seungmin
    • Journal of Digital Convergence / v.12 no.9 / pp.169-175 / 2014
  • The ICT (information and communication technology) paradigm shift, including the burgeoning use of mobile, SNS, and smart devices, has resulted in an explosion of data along with lifestyle changes. We have thus arrived at the age of big data. In the meantime, a number of difficulties have arisen, in terms of cost or on the technical side, with respect to the use of large quantities of data. However, big data has begun to receive attention with the advent of efficient big data technologies such as Hadoop. In this paper, we discuss a patent analysis of big data platform technology research and development in major countries. In particular, we analyzed 2,568 patent applications and registered patents from four countries as of December 2010.

Visual Semantic Based 3D Video Retrieval System Using HDFS

  • Ranjith Kumar, C.;Suguna, S.
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.8 / pp.3806-3825 / 2016
  • This paper presents a new framework for visual-semantic based 3D video search and retrieval applications. Recent 3D retrieval work focuses on shape analysis such as object matching, classification, and retrieval rather than on video retrieval as a whole. In this context, we investigate the concept of 3D content-based video retrieval (3D-CBVR) for the first time. For this purpose, we combine the bag-of-visual-words (BoVW) model and MapReduce in a 3D framework, and we fuse shape, color, and texture for feature extraction: geometric and topological features for shape, and a 3D co-occurrence matrix for color and texture. After the local descriptors are extracted, a Threshold-Based Predictive Clustering Tree (TB-PCT) algorithm is used to generate the visual codebook. Matching is then performed using a soft weighting scheme with the L2 distance function. Finally, the retrieved results are ranked according to their index values. To handle the large amount of data and provide efficient retrieval, HDFS is incorporated into the system. Using a 3D video dataset, we evaluate the performance of the proposed system and show that it produces accurate results while reducing time complexity.
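
As one hedged example of the matching step, the sketch below soft-assigns a local descriptor to visual codewords using the L2 distance, with nearer codewords receiving larger weights; the Gaussian-style weighting and its sigma parameter are assumptions, not necessarily the paper's exact soft-weighting scheme.

```java
// Minimal sketch of soft assignment of a local descriptor to visual codewords
// using the L2 distance: nearer codewords receive larger weights. The
// Gaussian-style weighting and sigma value are illustrative assumptions.
public class SoftCodewordAssignment {
    public static double[] softWeights(double[] descriptor, double[][] codebook, double sigma) {
        double[] weights = new double[codebook.length];
        double sum = 0.0;
        for (int i = 0; i < codebook.length; i++) {
            double d2 = 0.0;
            for (int j = 0; j < descriptor.length; j++) {
                double diff = descriptor[j] - codebook[i][j];
                d2 += diff * diff; // squared L2 distance to codeword i
            }
            weights[i] = Math.exp(-d2 / (2 * sigma * sigma));
            sum += weights[i];
        }
        for (int i = 0; i < weights.length; i++) {
            weights[i] /= sum; // normalize so the weights form one soft histogram entry
        }
        return weights;
    }
}
```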

A Parallel HDFS and MapReduce Functions for Emotion Analysis (감성분석을 위한 병렬적 HDFS와 맵리듀스 함수)

  • Back, BongHyun;Ryoo, Yun-Kyoo
    • Journal of the Korea society of information convergence / v.7 no.2 / pp.49-57 / 2014
  • Recently, opinion mining has been introduced to extract useful information from SNS data and to evaluate the true intentions of users. Opinion mining requires several efficient techniques to collect and analyze a large amount of SNS data and extract meaningful information from it. In this paper, we therefore propose a parallel HDFS (Hadoop Distributed File System) and MapReduce-based emotion functions to extract emotional information about users from various unstructured big data on social networks. The experimental results verify that the proposed system and functions perform faster than O(n) in data gathering and loading time, and maintain stable load balancing of memory and CPU resources.
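
The paper's emotion functions are not spelled out in the abstract; the following word-count-style Hadoop MapReduce sketch, with a tiny placeholder emotion lexicon, only illustrates the general shape such map and reduce functions could take.

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Minimal sketch of a MapReduce emotion count over text stored in HDFS.
// The tiny lexicon is a placeholder assumption; the paper's emotion
// functions are not reproduced here.
public class EmotionCount {
    public static class EmotionMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final Set<String> LEXICON =
                new HashSet<>(Arrays.asList("happy", "sad", "angry", "love", "fear"));
        private static final IntWritable ONE = new IntWritable(1);

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            for (String token : value.toString().toLowerCase().split("\\s+")) {
                if (LEXICON.contains(token)) {
                    context.write(new Text(token), ONE); // emit (emotion word, 1)
                }
            }
        }
    }

    public static class EmotionReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum)); // total count per emotion word
        }
    }
}
```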


Improving Join Performance for SPARQL Query Processing in the Clouds (클라우드에서 SPARQL 질의 처리를 위한 조인 성능 향상)

  • Choi, Gyu-Jin;Son, Yun-Hee;Lee, Kyu-Chul
    • Journal of KIISE / v.43 no.6 / pp.700-709 / 2016
  • Recently, with the rapid growth of LOD (Linked Open Data), existing methods based on a single machine have reached their performance limits. Existing solutions use distributed frameworks such as MapReduce to improve performance. However, processing SPARQL queries with the MapReduce framework involves multiple MapReduce jobs, which incur additional costs, and the problem of processing unnecessary data also arises. In this study, we propose a method that reduces the number of MapReduce jobs during SPARQL query processing and uses bitmap-based join indexes to minimize the cost of processing unnecessary data.
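
A minimal sketch of the bitmap join-index idea follows: each triple pattern keeps a bitmap over entity IDs, and intersecting the bitmaps before the join filters out bindings that cannot participate, so less data reaches the MapReduce join stage. The fixed ID space is an assumption for illustration.

```java
import java.util.BitSet;

// Minimal sketch of a bitmap join index: intersecting per-pattern bitmaps
// prunes bindings that cannot join, reducing data shipped to the join step.
public class BitmapJoinIndex {
    public static BitSet joinCandidates(BitSet patternA, BitSet patternB) {
        BitSet candidates = (BitSet) patternA.clone();
        candidates.and(patternB); // entities matching both patterns survive
        return candidates;
    }

    public static void main(String[] args) {
        BitSet a = new BitSet();
        a.set(1); a.set(3); a.set(7);   // entity IDs matching pattern A
        BitSet b = new BitSet();
        b.set(3); b.set(7); b.set(9);   // entity IDs matching pattern B
        System.out.println("join candidates: " + joinCandidates(a, b)); // {3, 7}
    }
}
```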

Design and Implementation of Cloud-based Sensor Data Management System (클라우드 기반 센서 데이터 관리 시스템 설계 및 구현)

  • Park, Kyoung-Wook;Kim, Kyong-Og;Ban, Kyeong-Jin;Kim, Eung-Kon
    • The Journal of the Korea institute of electronic communication sciences / v.5 no.6 / pp.672-677 / 2010
  • Recently, an efficient management system for large-scale sensor data has become necessary due to the increasing deployment of large-scale sensor networks. In this paper, we propose a cloud-based sensor data management system with low cost, high scalability, and efficiency. Sensor data from the sensor networks are transmitted to the cloud through a cloud gateway, where outlier detection and event processing are performed. The transmitted sensor data are stored in Hadoop HBase, a distributed column-oriented database, and processed in parallel by a query processing module designed according to the MapReduce model. The proposed system can work with applications on a variety of platforms because the processed results are provided through a REST-based web service.
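
A minimal sketch of the cloud-gateway path described above: discard readings outside a plausible range (a stand-in for the outlier detection step) and store the rest in HBase. The table name, column family, and value range are assumptions for illustration.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

// Minimal sketch of a cloud-gateway step: drop readings outside a plausible
// range (simple outlier check) and store the rest in HBase.
public class SensorDataWriter {
    public static void store(String sensorId, long timestamp, double value) throws IOException {
        if (value < -50.0 || value > 150.0) {
            return; // assumed outlier bound: discard implausible readings at the gateway
        }
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("sensor_data"))) {
            // Row key: sensorId + timestamp keeps readings of one sensor adjacent.
            Put put = new Put(Bytes.toBytes(sensorId + "#" + timestamp));
            put.addColumn(Bytes.toBytes("m"), Bytes.toBytes("value"), Bytes.toBytes(value));
            table.put(put);
        }
    }
}
```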