Title/Summary/Keyword: Hadoop Cluster


Design and Implementation of a Monitor for Hadoop Cluster (Hadoop 클러스터를 위한 모니터의 설계 및 구현)

  • Keum, Tae-Hoon; Lee, Won-Joo; Jeon, Chang-Ho
    • Journal of the Institute of Electronics Engineers of Korea CI / v.49 no.1 / pp.8-15 / 2012
  • In this paper, we propose a new monitor for collecting job information from a Hadoop cluster in real time. The monitor consists of two programs, called the Collector and the Agent. The Agent collects node and job information from the Hadoop cluster, and the Collector analyzes the collected information and saves it in a database. The Collector is also placed on a separate node outside the Hadoop cluster so that it neither interferes with Hadoop's work nor causes overload. When the proposed monitor was implemented and applied, the testbed cluster was able to detect dead nodes immediately. In addition, we were able to find inefficient Hadoop jobs and, by modifying them, further enhance the performance of Hadoop.
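
A minimal sketch of the Agent/Collector split described above, assuming a JSON-over-TCP report format and a SQLite store; the port, metric fields, and schema are illustrative, not the authors' implementation:

```python
# Sketch of the Agent (runs on each cluster node) and the Collector (runs on
# a separate node, per the paper's design). Address and schema are assumed.
import json
import os
import socket
import sqlite3
import time

COLLECTOR_HOST, COLLECTOR_PORT = "collector.example", 9090  # assumed address

def agent_report_once() -> None:
    """Sample simple node metrics and ship them to the Collector as JSON."""
    load1, _, _ = os.getloadavg()
    record = {"host": socket.gethostname(), "ts": time.time(), "load1": load1}
    with socket.create_connection((COLLECTOR_HOST, COLLECTOR_PORT)) as s:
        s.sendall((json.dumps(record) + "\n").encode())

def collector_loop(db_path: str = "monitor.db") -> None:
    """Accept Agent reports and persist them outside the Hadoop cluster."""
    db = sqlite3.connect(db_path)
    db.execute("CREATE TABLE IF NOT EXISTS metrics(host TEXT, ts REAL, load1 REAL)")
    with socket.create_server(("0.0.0.0", COLLECTOR_PORT)) as srv:
        while True:
            conn, _addr = srv.accept()
            with conn, conn.makefile() as lines:
                for line in lines:
                    r = json.loads(line)
                    db.execute("INSERT INTO metrics VALUES (?, ?, ?)",
                               (r["host"], r["ts"], r["load1"]))
                    db.commit()
```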

An Analytical Approach to Evaluation of SSD Effects under MapReduce Workloads

  • Ahn, Sungyong; Park, Sangkyu
    • JSTS: Journal of Semiconductor Technology and Science / v.15 no.5 / pp.511-518 / 2015
  • As the cost per byte of SSDs dramatically decreases, introducing SSDs into Hadoop becomes an attractive choice for high-performance data processing. In this paper, the cost-per-performance of an SSD-based Hadoop cluster (SSD-Hadoop) and an HDD-based Hadoop cluster (HDD-Hadoop) is evaluated. For this, we propose a MapReduce performance model using a queuing network to simulate the execution time of a MapReduce job with varying cluster sizes. To achieve an accurate model, the execution time distribution of MapReduce jobs is carefully profiled. The developed model can predict the execution time of MapReduce jobs with less than a 7% difference in most cases. Simulations with a varying number of cluster nodes also show that SSD-Hadoop is 20% more cost-efficient than HDD-Hadoop, because it needs fewer nodes to achieve comparable performance.
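
The cost-per-performance comparison can be illustrated with a far cruder model than the paper's queuing network; every throughput and price below is an invented placeholder:

```python
# Back-of-the-envelope cost-per-performance comparison (illustrative numbers
# only; the paper fits a profiled queuing-network model instead).
def job_time(n_nodes: int, per_node_throughput: float,
             job_size: float, fixed_overhead: float) -> float:
    """Idealized MapReduce job time: fixed overhead + perfectly parallel work."""
    return fixed_overhead + job_size / (n_nodes * per_node_throughput)

def cost_per_job(n_nodes: int, node_price_per_hour: float, t_seconds: float) -> float:
    return n_nodes * node_price_per_hour * t_seconds / 3600.0

# Assumed: SSD nodes process data ~2x faster but cost more per hour.
for n in (4, 8, 16):
    t_hdd = job_time(n, per_node_throughput=50.0, job_size=100_000, fixed_overhead=30.0)
    t_ssd = job_time(n, per_node_throughput=100.0, job_size=100_000, fixed_overhead=30.0)
    print(f"{n} nodes: HDD ${cost_per_job(n, 0.20, t_hdd):.3f}/job, "
          f"SSD ${cost_per_job(n, 0.30, t_ssd):.3f}/job")
```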

Task Assignment Policy for Hadoop Considering Availability of Nodes (노드의 가용성을 고려한 하둡 태스크 할당 정책)

  • Ryu, Wooseok
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2017.05a / pp.103-105 / 2017
  • Hadoop MapReduce is a processing framework in which users' jobs can be processed efficiently, in parallel and in a distributed manner, on a Hadoop cluster. MapReduce task schedulers select target nodes and assign users' tasks to them. Previous schedulers cannot fully utilize the resources of a Hadoop cluster because they do not consider the dynamic characteristics of the cluster that arise from varying node availability. To increase the utilization of the Hadoop cluster, this paper proposes a novel task assignment policy for MapReduce that efficiently assigns a job's tasks to a dynamic cluster by considering the availability of each node.
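
A toy version of an availability-aware assignment rule; the availability metric and the score (availability × free slots) are assumptions standing in for the paper's actual policy:

```python
# Availability-weighted node selection (heuristic assumed for illustration).
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    availability: float  # observed fraction of time the node was up, 0..1
    free_slots: int      # task slots currently free

def pick_node(nodes: list[Node]) -> Node:
    """Prefer nodes that are both historically available and lightly loaded."""
    candidates = [n for n in nodes if n.free_slots > 0]
    return max(candidates, key=lambda n: n.availability * n.free_slots)

nodes = [Node("dn1", 0.99, 2), Node("dn2", 0.60, 2), Node("dn3", 0.95, 1)]
chosen = pick_node(nodes)   # dn1: score 1.98 beats dn2 (1.20) and dn3 (0.95)
chosen.free_slots -= 1      # assign one task to it
```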


Performance Evaluation of MapReduce Application running on Hadoop (Hadoop 상에서 MapReduce 응용프로그램 평가)

  • Kim, Junsu; Kang, Yunhee; Park, Youngbom
    • Journal of Software Engineering Society / v.25 no.4 / pp.63-67 / 2012
  • With the growth of data being generated in many fields, the distributed programming model MapReduce has been introduced to handle it. In this paper, we build two cluster systems on SUN Blade150 machines, one with a Solaris environment and one with Linux, and then evaluate the performance of a MapReduce application running on the MapReduce middleware Hadoop in terms of its average elapsed time and standard deviation. The experimental results show that the overall performance of the MapReduce application based on Hadoop is affected by the configuration of the cluster system.
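
A minimal harness for the kind of measurement reported here, running a job several times and reporting the average elapsed time and standard deviation; the wordcount command line is a placeholder for whatever MapReduce application is evaluated:

```python
# Repeated timing of a MapReduce job; reports mean and standard deviation.
import statistics
import subprocess
import time

def time_job(cmd: list[str], runs: int = 5) -> tuple[float, float]:
    elapsed = []
    for i in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd + [f"out-{i}"], check=True)  # fresh output dir per run
        elapsed.append(time.perf_counter() - start)
    return statistics.mean(elapsed), statistics.stdev(elapsed)

mean_s, stdev_s = time_job(
    ["hadoop", "jar", "hadoop-examples.jar", "wordcount", "input"])
print(f"avg={mean_s:.1f}s stdev={stdev_s:.1f}s")
```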


A Novel Node Management in Hadoop Cluster by using DNA

  • Balaraju. J; PVRD. Prasada Rao
    • International Journal of Computer Science & Network Security / v.23 no.9 / pp.134-140 / 2023
  • Distributed systems play a vital role in storing and processing big data, and the amount of data generated from various sources is increasing rapidly every second. Hadoop offers a scalable and efficient distributed system that supports commodity hardware by combining different networks in a topographical locality. Node counts in Hadoop clusters are increasing rapidly across versions, which makes clusters difficult to manage. Hadoop itself does not provide node management features such as node addition and deletion. Node identification in a cluster depends entirely on DHCP servers, which manage IP addresses and hostnames based on the physical (MAC) address of each node. This leaves scope for a hacker to steal data using an IP address or hostname, and to disturb the distributed system by adding a malicious node or assigning a duplicate IP. This paper proposes novel node management for the distributed system using DNA hiding, generating a unique key from the unique physical (MAC) address and hostname of each node. The proposed mechanism provides better node management for the Hadoop cluster, supports node addition and deletion with limited computation, and offers better protection of nodes from hackers. The main goal of this paper is to propose an algorithm that hides node information in DNA sequences to improve node security.
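
The core key-generation idea can be sketched as follows; the use of SHA-256 and the 2-bit-per-base mapping are assumptions, not the authors' exact algorithm:

```python
# Derive a per-node key from MAC + hostname and encode it as a DNA string,
# two bits per base (mapping and hash choice assumed for illustration).
import hashlib
import socket
import uuid

BASES = "ACGT"  # 00 -> A, 01 -> C, 10 -> G, 11 -> T

def node_dna_key() -> str:
    mac = uuid.getnode()                      # this node's MAC as a 48-bit int
    hostname = socket.gethostname()
    digest = hashlib.sha256(f"{mac:012x}:{hostname}".encode()).digest()
    bases = []
    for byte in digest:
        for shift in (6, 4, 2, 0):            # four 2-bit symbols per byte
            bases.append(BASES[(byte >> shift) & 0b11])
    return "".join(bases)                     # 128-base node identifier

print(node_dna_key())
```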

Design and Implementation of Distributed Cluster Supporting Dynamic Down-Scaling of the Cluster (노드의 동적 다운 스케일링을 지원하는 분산 클러스터 시스템의 설계 및 구현)

  • Woo-Seok Ryu
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.18 no.2 / pp.361-366 / 2023
  • Apache Hadoop, a representative framework for distributed processing of big data, has the advantage that cluster size can grow to thousands of nodes to improve parallel distributed processing performance. Shrinking the cluster, however, is limited to permanently decommissioning nodes with defects or degraded performance, so there are limits to operating multiple nodes flexibly in small clusters. In this paper, we discuss the problems that occur when removing nodes from a Hadoop cluster and propose a dynamic down-scaling technique to manage the distributed cluster more flexibly. To do this, we design and implement a modified Hadoop system and interfaces that support dynamic down-scaling of the cluster, allowing a node to be temporarily paused and later reconnected when necessary, rather than decommissioned, when it is removed from the cluster. Experimental results verify that effective downsizing can be performed without performance degradation.
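
A sketch of the pause/reconnect interface argued for above, as opposed to permanent decommissioning; the class and method names are hypothetical:

```python
# Pause/resume semantics for down-scaling (interface shape assumed).
from dataclasses import dataclass, field

@dataclass
class ElasticCluster:
    active: set[str] = field(default_factory=set)
    paused: set[str] = field(default_factory=set)

    def pause(self, node: str) -> None:
        """Take a node out of scheduling without decommissioning it."""
        self.active.discard(node)
        self.paused.add(node)

    def resume(self, node: str) -> None:
        """Reconnect a paused node; its block data is still stored locally."""
        self.paused.discard(node)
        self.active.add(node)

cluster = ElasticCluster(active={"dn1", "dn2", "dn3"})
cluster.pause("dn3")    # down-scale temporarily during low load
cluster.resume("dn3")   # bring it back without re-replicating its blocks
```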

RDP: A storage-tier-aware Robust Data Placement strategy for Hadoop in a Cloud-based Heterogeneous Environment

  • Muhammad Faseeh Qureshi, Nawab; Shin, Dong Ryeol
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.9 / pp.4063-4086 / 2016
  • Cloud computing is a robust technology that helps resolve many parallel and distributed computing issues in the modern big data environment. Hadoop is an ecosystem that processes large data sets in a distributed computing environment. HDFS, the Hadoop file system, distributes data blocks across cluster nodes. Data block placement has become a bottleneck for overall performance in a Hadoop cluster. The current placement policy assumes that all Datanodes have equal computing capacity to process data blocks, where capacity includes the availability of the same storage media and the same processing performance on each node. As a result, Hadoop cluster performance suffers from unbalanced workloads, an inefficient storage tier, network traffic congestion, and HDFS integrity issues. This paper proposes a storage-tier-aware Robust Data Placement (RDP) scheme, which systematically resolves unbalanced workloads, reduces network congestion to an optimal state, utilizes the storage tier effectively, and minimizes HDFS integrity issues. The experimental results show that the proposed approach reduced the unbalanced workload issue by 72%. Moreover, it resolved the storage-tier compatibility problem by 81% by predicting storage for block jobs, and improved overall data block placement by 78% through pre-calculated computing capacity allocations and execution of map files over the respective Namenode and Datanodes.
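
In the spirit of RDP, tier-aware placement can be sketched as "match the requested storage tier, then pick the least-loaded node"; this heuristic is an illustrative simplification, not the paper's full scheme:

```python
# Tier-aware block placement heuristic (simplified stand-in for RDP).
from dataclasses import dataclass

@dataclass
class DataNode:
    name: str
    tier: str          # e.g. "ssd" or "hdd"
    used_blocks: int

def place_block(datanodes: list[DataNode], want_tier: str) -> DataNode:
    same_tier = [d for d in datanodes if d.tier == want_tier]
    pool = same_tier or datanodes          # fall back if the tier is absent/full
    chosen = min(pool, key=lambda d: d.used_blocks)
    chosen.used_blocks += 1
    return chosen

nodes = [DataNode("dn1", "ssd", 120), DataNode("dn2", "hdd", 40),
         DataNode("dn3", "ssd", 80)]
print(place_block(nodes, "ssd").name)      # dn3: right tier, lighter load
```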

IoT Data Processing System Using a Public Cloud based Hadoop Cluster (Public Cloud 기반 Hadoop Cluster를 이용한 IoT 데이터 처리 시스템 설계)

  • Lee, Hwangro; Choi, Eunmi
    • Proceedings of the Korea Information Processing Society Conference / 2013.11a / pp.188-191 / 2013
  • The IoT (Internet of Things) is a network of things in which the three distributed environment elements of humans, things, and services form intelligent relationships such as sensing, networking, and information processing cooperatively, without explicit human intervention. This paper studies how to integrate with cloud services in order to deploy, at the right time and place, an environment for processing and serving the information sensed in the IoT. Building a Hadoop cluster in a public cloud environment as an integrated environment applicable to IoT services makes it possible to store explosively growing IoT data and to build a system that processes and analyzes it effectively within a short time; analyzing the data stored in the distributed storage and discovering meaningful knowledge can contribute to creating new business models. In this paper, we propose a method to process and analyze data generated by the IoT effectively by configuring a Hadoop cluster in a public cloud environment.
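
One plausible shape for the ingestion path described here is to batch readings locally and push each batch into HDFS on the cloud-hosted cluster; the HDFS path, record format, and batching are assumptions (production pipelines would typically use a dedicated ingestion layer):

```python
# Batch IoT readings locally, then ship each batch into HDFS via the
# standard `hdfs dfs -put` CLI (paths and record fields assumed).
import json
import subprocess
import time

def flush_batch(records: list[dict], hdfs_dir: str = "/iot/raw") -> None:
    fname = f"batch-{int(time.time())}.json"
    with open(fname, "w") as f:
        for r in records:
            f.write(json.dumps(r) + "\n")
    subprocess.run(["hdfs", "dfs", "-put", fname, f"{hdfs_dir}/{fname}"],
                   check=True)

batch = [{"sensor": "temp-01", "ts": time.time(), "value": 21.7}]
flush_batch(batch)
```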

Scaling of Hadoop Cluster for Cost-Effective Processing of MapReduce Applications (비용 효율적 맵리듀스 처리를 위한 클러스터 규모 설정)

  • Ryu, Woo-Seok
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.15 no.1 / pp.107-114 / 2020
  • This paper studies a method for estimating the scale of a Hadoop cluster to process big data in a cost-effective manner. In the case of medical institutions, demand for cloud-based big data analysis is increasing, as medical records can now be stored outside the hospital. This paper first analyzes the Amazon EMR framework, one of the popular cloud-based big data frameworks. It then presents an efficiency model for scaling the Hadoop cluster so as to execute a MapReduce application more cost-effectively, and analyzes the factors that influence the execution of the MapReduce application through several experiments under various conditions. The cost efficiency of big data analysis can be increased by choosing the cluster scale with the most efficient processing time relative to the operational cost.
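
Once a job-time curve has been estimated, the sizing question reduces to minimizing cost subject to a processing-time target; the curve and prices below are invented for illustration:

```python
# Pick the cluster size that minimizes cost while meeting a deadline.
def est_time_h(n: int) -> float:
    """Assumed job-time curve: a serial fraction plus parallelizable work."""
    return 0.2 + 8.0 / n  # hours on an n-node cluster

def cost(n: int, price_per_node_hour: float = 0.25) -> float:
    return n * price_per_node_hour * est_time_h(n)

deadline_h = 1.0
feasible = [n for n in range(1, 65) if est_time_h(n) <= deadline_h]
best = min(feasible, key=cost)            # smallest cluster meeting the deadline
print(best, est_time_h(best), cost(best))  # 10 nodes, 1.0 h, $2.50
```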

Monitoring Tool for Hadoop Cluster (Hadoop 클러스터를 위한 모니터링 툴)

  • Keum, Tae-Hoon; Lee, Won-Joo; Jeon, Chang-Ho
    • Proceedings of the Korean Society of Computer Information Conference / 2010.07a / pp.17-18 / 2010
  • Cloud computing, which has recently become a hot topic, uses clusters consisting of many nodes. Monitoring tools are used to manage such clusters efficiently. However, existing monitoring tools focus on the availability and overhead of the nodes composing the cluster and on data collection/transfer methods, so they cannot monitor the detailed information of a cloud cluster. This paper therefore proposes a monitoring tool for Hadoop, a platform on which cloud computing can be built.
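
On a modern Hadoop stack, the kind of Hadoop-aware detail polling called for here could start from the YARN ResourceManager REST API; the host below is a placeholder, while /ws/v1/cluster/metrics is a standard YARN endpoint:

```python
# Poll cluster-level detail from the YARN ResourceManager REST API
# (ResourceManager address assumed; endpoint is part of standard YARN).
import json
import urllib.request

RM = "http://resourcemanager.example:8088"   # assumed ResourceManager address

def cluster_metrics() -> dict:
    with urllib.request.urlopen(f"{RM}/ws/v1/cluster/metrics") as resp:
        return json.load(resp)["clusterMetrics"]

m = cluster_metrics()
print(m["activeNodes"], m["lostNodes"], m["appsRunning"])
```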
