• Title/Summary/Keyword: distributed parallel computing


Scalable Prediction Models for Airbnb Listing in Spark Big Data Cluster using GPU-accelerated RAPIDS

  • Muralidharan, Samyuktha;Yadav, Savita;Huh, Jungwoo;Lee, Sanghoon;Woo, Jongwook
    • Journal of Information and Communication Convergence Engineering / v.20 no.2 / pp.96-102 / 2022
  • We aim to build predictive models for Airbnb prices using GPU-accelerated RAPIDS in a big data cluster. The Airbnb Listings datasets are used for the predictive analysis. Several machine-learning algorithms were adopted to build models that predict the price of Airbnb listings. We compare the results of traditional and big data approaches to machine learning for price prediction and discuss the performance of the models. We built big data models using a Databricks Spark cluster, a distributed parallel computing system. Furthermore, we implemented models on multiple GPUs using RAPIDS in the Spark cluster. The GPU model was developed using the XGBoost algorithm, whereas the other models were developed using traditional central processing unit (CPU)-based algorithms. This study compared all models in terms of accuracy metrics and computing time. We observed that the XGBoost model with RAPIDS on GPUs achieved the highest accuracy and the shortest computing time.
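
As a rough illustration of the approach this abstract describes, the sketch below trains an XGBoost regressor with the GPU histogram tree method on tabular listing features. The file name, column names, and hyperparameters are assumptions of mine, not the paper's schema, and the Databricks/Spark distribution layer is omitted; newer XGBoost releases express GPU training as `tree_method="hist", device="cuda"`.

```python
# Minimal sketch of GPU-accelerated price regression with XGBoost.
# The dataset path and feature columns are hypothetical placeholders.
import pandas as pd
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

df = pd.read_csv("airbnb_listings.csv")  # hypothetical input file
features = ["accommodates", "bedrooms", "bathrooms", "review_score"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["price"], test_size=0.2, random_state=42)

# tree_method="gpu_hist" builds the split histograms on the GPU,
# which is the kind of acceleration RAPIDS exploits in a Spark cluster.
model = xgb.XGBRegressor(n_estimators=500, tree_method="gpu_hist")
model.fit(X_train, y_train)

rmse = mean_squared_error(y_test, model.predict(X_test), squared=False)
print(f"test RMSE: {rmse:.2f}")
```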

Decombined Distributed Parallel VQ Codebook Generation Based on MapReduce (맵리듀스를 사용한 디컴바인드 분산 VQ 코드북 생성 방법)

  • Lee, Hyunjin
    • Journal of Digital Contents Society / v.15 no.3 / pp.365-371 / 2014
  • In the era of big data, algorithms designed for the existing IT environment cannot run directly on distributed architectures such as Hadoop. Thus, new distributed algorithms built on a distributed framework such as MapReduce are needed. Lloyd's algorithm, commonly used for vector quantization, has recently been implemented using MapReduce. In this paper, we propose a decombined distributed VQ codebook generation algorithm, based on the existing MapReduce-based distributed VQ codebook generation algorithm, to obtain results faster. Applying the proposed algorithm to big data showed higher performance than the conventional method.
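
The baseline this paper extends is Lloyd's algorithm cast as map and reduce steps. Below is a minimal single-process sketch of that baseline formulation, not the proposed decombined variant; the function names and the fixed iteration count are my own choices.

```python
# Sketch of one Lloyd iteration expressed as map/reduce steps.
import numpy as np

def map_assign(vectors, codebook):
    """Map: emit (nearest-codeword-index, vector) pairs."""
    for v in vectors:
        idx = int(np.argmin(np.linalg.norm(codebook - v, axis=1)))
        yield idx, v

def reduce_update(pairs, codebook):
    """Reduce: new codeword = mean of the vectors assigned to it."""
    sums = np.zeros_like(codebook)
    counts = np.zeros(len(codebook))
    for idx, v in pairs:
        sums[idx] += v
        counts[idx] += 1
    nonempty = counts > 0
    codebook[nonempty] = sums[nonempty] / counts[nonempty, None]
    return codebook

rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 2))
codebook = data[rng.choice(len(data), 8, replace=False)].copy()
for _ in range(10):                      # fixed iteration count for brevity
    codebook = reduce_update(map_assign(data, codebook), codebook)
print(codebook)
```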

Performance Analysis of Distributed Parallel Processing Schemes for Large Data in Cloud Computing (클라우드 컴퓨팅에서의 대규모 데이터를 위한 분산 병렬 처리 기법의 성능분석)

  • Hong, Seung-Tae;Chang, Jae-Woo
    • Proceedings of the Korean Association of Geographic Information Studies Conference / 2010.09a / pp.111-118 / 2010
  • Recently, cloud computing, which provides IT resources as services over the Internet, has been actively studied in the IT field. Meanwhile, to provide efficient cloud computing, research on distributed data storage and distributed parallel processing techniques for storing and managing enormous amounts of data across numerous servers is essential. To this end, this paper reviews representative distributed parallel processing techniques and compares and analyzes them. Finally, we build a Hadoop-based cluster and use it to evaluate the performance of distributed parallel processing techniques on large-scale data.
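
For context on what runs on a cluster like the one evaluated here, the canonical Hadoop Streaming job (word count) is sketched below. This is a generic illustration, not the paper's benchmark, and the launch command paths are illustrative only.

```python
# Combined Hadoop Streaming mapper/reducer (word count), the usual
# smoke test for a Hadoop cluster. Illustrative usage:
#   hadoop jar hadoop-streaming.jar -file wc.py \
#       -mapper "wc.py map" -reducer "wc.py reduce" -input in -output out
import sys

def mapper():
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")          # emit (word, 1)

def reducer():
    # Streaming delivers mapper output sorted by key, so equal words
    # arrive adjacently and can be summed with one pass.
    current, total = None, 0
    for line in sys.stdin:
        word, count = line.rsplit("\t", 1)
        if word != current:
            if current is not None:
                print(f"{current}\t{total}")
            current, total = word, 0
        total += int(count)
    if current is not None:
        print(f"{current}\t{total}")

if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()
```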

A Grid Service based on OGSA for Process Fault Detection (프로세스 결함 검출을 위한 OGSA 기반 그리드 서비스의 설계 및 구현)

  • Kang, Yun-Hee
    • Proceedings of the Korea Contents Association Conference / 2004.11a / pp.314-317 / 2004
  • With the advance of network and software infrastructure, Grid-computing technology on clusters of heterogeneous computing resources is becoming pervasive. Grid computing requires the coordinated use of an assembly of distributed computers linked by a WAN. As the number of grid system components increases, the probability of failure in grid computing is higher than in traditional parallel computing. To make grid applications robust, fault detection is critical and is an essential element of design and implementation. In this paper, an OGSA-based process fault-detection service is presented that provides high reliability in a low-network-traffic environment.
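
As a generic sketch of the idea behind such a service, the snippet below detects process faults from missed heartbeats, which keeps network traffic low. This is not Globus/OGSA code; all class and method names are mine.

```python
# Generic heartbeat-style fault detector (illustration only).
import time

class HeartbeatMonitor:
    def __init__(self, timeout_s=5.0):
        self.timeout_s = timeout_s
        self.last_seen = {}            # process id -> last heartbeat time

    def heartbeat(self, pid):
        """Called by (or on behalf of) a monitored process."""
        self.last_seen[pid] = time.monotonic()

    def suspects(self):
        """Processes whose heartbeat is older than the timeout."""
        now = time.monotonic()
        return [pid for pid, t in self.last_seen.items()
                if now - t > self.timeout_s]

monitor = HeartbeatMonitor(timeout_s=2.0)
monitor.heartbeat("worker-1")
monitor.heartbeat("worker-2")
time.sleep(2.5)
monitor.heartbeat("worker-2")          # worker-1 stays silent
print(monitor.suspects())              # -> ['worker-1']
```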

An Adaptive Workflow Scheduling Scheme Based on an Estimated Data Processing Rate for Next Generation Sequencing in Cloud Computing

  • Kim, Byungsang;Youn, Chan-Hyun;Park, Yong-Sung;Lee, Yonggyu;Choi, Wan
    • Journal of Information Processing Systems / v.8 no.4 / pp.555-566 / 2012
  • The cloud environment makes it possible to analyze large data sets on a scalable computing infrastructure. In the bioinformatics field, applications are composed of complex workflow tasks, which require huge data storage as well as computing-intensive parallel workloads. Many distributed solutions have been introduced. However, they focus on static resource provisioning with a batch-processing scheme in a local computing farm and data storage. In the case of a large-scale workflow system, it is inevitable and valuable to outsource all or part of its tasks to public clouds to reduce resource costs. Problems arise, however, from the transfer time of huge datasets as well as the unbalanced completion times of different problem sizes. In this paper, we propose an adaptive resource-provisioning scheme that includes run-time data distribution and collection services for hiding the data transfer time. The proposed adaptive resource-provisioning scheme optimizes the allocation ratio of computing elements to the different datasets in order to minimize the total makespan under resource constraints. We conducted experiments with a well-known sequence alignment algorithm, and the results showed that the proposed scheme is efficient for the cloud environment.
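
A hedged sketch of the core allocation idea as the abstract states it: give each dataset a share of the computing elements proportional to its estimated work, so all partitions finish at roughly the same time and the makespan shrinks. The variable names, the single per-node rate, and the rounding rule are my simplifications, not the paper's formulation.

```python
# Allocate nodes to datasets proportionally to estimated work.
def allocate(dataset_sizes, rates, total_nodes):
    """dataset_sizes[i] / rates[i] estimates the work of dataset i."""
    work = [s / r for s, r in zip(dataset_sizes, rates)]
    total = sum(work)
    alloc = [max(1, round(total_nodes * w / total)) for w in work]
    # Trim any rounding overshoot from the largest allocation.
    while sum(alloc) > total_nodes:
        alloc[alloc.index(max(alloc))] -= 1
    return alloc

# Three datasets of different sizes, equal per-node rates, 10 nodes.
print(allocate([400, 100, 500], [1.0, 1.0, 1.0], 10))  # -> [4, 1, 5]
```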

Preliminary Study on the Enhancement of Reconstruction Speed for Emission Computed Tomography Using Parallel Processing (병렬 연산을 이용한 방출 단층 영상의 재구성 속도향상 기초연구)

  • Park, Min-Jae;Lee, Jae-Sung;Kim, Soo-Mee;Kang, Ji-Yeon;Lee, Dong-Soo;Park, Kwang-Suk
    • Nuclear Medicine and Molecular Imaging / v.43 no.5 / pp.443-450 / 2009
  • Purpose: Conventional image reconstruction uses simplified physical models of projection. However, realistic physics, for example for fully 3D reconstruction, takes too long to process all the data in clinical practice and is infeasible on a common reconstruction machine because of the large memory required by complex physical models. We suggest a realistic distributed-memory model of fast reconstruction using parallel processing on personal computers to enable these large-scale techniques. Materials and Methods: Preliminary feasibility tests on virtual machines and various performance tests on the commercial supercomputer Tachyon were performed. The expectation-maximization algorithm was tested with a common 2D projection model and with realistic 3D lines of response. Since processing slowed down (by up to 6 times) after a certain number of iterations, compiler optimization was performed to maximize the efficiency of parallelization. Results: Parallel processing of a program on multiple computers was possible on Linux with MPICH and NFS. We verified that the differences between the parallel-processed image and the single-processed image at the same iterations were below the significant digits of floating-point numbers (about 6 bits). Two processors showed good parallel-computing efficiency (a 1.96-fold speedup). The slowdown was resolved by vectorization using SSE. Conclusion: Through this study, a practical parallel computing system for the clinic was established that can reconstruct images with ample memory using realistic physical models that cannot be simplified.
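
A hedged sketch of the distributed-memory pattern this setup relies on: each node handles its own slice of the projection data per EM iteration, and the partial results are merged with an all-reduce. The update below is a toy stand-in for a real MLEM system model, and mpi4py stands in for raw MPICH calls for brevity.

```python
# Data-parallel EM-style iteration skeleton with mpi4py.
# Run with e.g.: mpirun -np 4 python em_parallel.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_proj, n_vox = 360, 128 * 128
local = np.array_split(np.arange(n_proj), size)[rank]  # this rank's slice
image = np.ones(n_vox)

for _ in range(10):
    # Each rank accumulates its partial correction term ...
    partial = np.zeros(n_vox)
    for p in local:
        partial += 1.0 + 0.01 * np.sin(p)  # placeholder back-projection
    # ... then all ranks sum their partials, one all-reduce per iteration.
    correction = np.empty(n_vox)
    comm.Allreduce(partial, correction, op=MPI.SUM)
    image *= correction / n_proj           # toy multiplicative update

if rank == 0:
    print("done, mean voxel value:", image.mean())
```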

Efficient Processing of Huge Airborne Laser Scanned Data Utilizing Parallel Computing and Virtual Grid (병렬처리와 가상격자를 이용한 대용량 항공 레이저 스캔 자료의 효율적인 처리)

  • Han, Soo-Hee;Heo, Joon;Lkhagva, Enkhbaatar
    • Journal of Korea Spatial Information System Society / v.10 no.4 / pp.21-26 / 2008
  • A method for processing huge airborne laser-scanned data using parallel computing and a virtual grid is proposed, and the method is tested by generating a raster DSM (Digital Surface Model) with IDW (Inverse Distance Weighting). Parallelism is employed for fast interpolation of huge point data, and the virtual grid is adopted to enhance the search efficiency over irregularly distributed point data. Processing time was measured on a cluster consisting of one master node and six slave nodes, showing efficiency close to 1 and good load scalability. In addition, large datasets that cannot be processed on a single system were processed on the cluster system.
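
A hedged sketch of the virtual-grid idea: bucket the scattered points into square cells so IDW only searches nearby cells rather than every point. The cell size, the one-ring neighborhood, and the serial form (the paper distributes this work across cluster nodes) are my simplifications.

```python
# IDW interpolation accelerated by a virtual grid (cell hashing).
from collections import defaultdict
import numpy as np

def build_grid(points, cell):
    grid = defaultdict(list)
    for x, y, z in points:
        grid[(int(x // cell), int(y // cell))].append((x, y, z))
    return grid

def idw(grid, qx, qy, cell, power=2.0):
    ci, cj = int(qx // cell), int(qy // cell)
    num = den = 0.0
    for di in (-1, 0, 1):                 # query cell plus its 8 neighbors
        for dj in (-1, 0, 1):
            for x, y, z in grid.get((ci + di, cj + dj), ()):
                d2 = (x - qx) ** 2 + (y - qy) ** 2
                if d2 == 0:
                    return z              # exact hit
                w = d2 ** (-power / 2)
                num, den = num + w * z, den + w
    return num / den if den else None     # None: no points in range

rng = np.random.default_rng(1)
pts = [(x, y, x + y) for x, y in rng.uniform(0, 100, size=(5000, 2))]
grid = build_grid(pts, cell=5.0)
print(idw(grid, 50.0, 50.0, cell=5.0))    # ~100 for z = x + y
```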

An Iterative Algorithm for the Bottom Up Computation of the Data Cube using MapReduce (맵리듀스를 이용한 데이터 큐브의 상향식 계산을 위한 반복적 알고리즘)

  • Lee, Suan;Jo, Sunhwa;Kim, Jinho
    • Journal of Information Technology and Architecture / v.9 no.4 / pp.455-464 / 2012
  • Due to the recent data explosion, methods that can meet the requirements of large-scale data analysis have been studied. This paper proposes the MRIterativeBUC algorithm, which enables efficient computation of large data cubes by distributed parallel processing with the MapReduce framework. The MRIterativeBUC algorithm is developed for efficient iterative operation of the BUC method with MapReduce, and it overcomes the storage-size and processing-capacity limitations of large data cube computation. It employs the idea of the iceberg cube, which computes only the aspects of interest to analysts, and distributes the parallel cube computation by partitioning and sorting. Thus, it reduces the data emitted, which in turn reduces network overhead, the processing load on each node, and eventually the cube-computation cost. The bottom-up cube computation and the iterative algorithm using MapReduce proposed in this paper can be extended in various ways and applied to many applications.
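
As a hedged sketch of the BUC method that MRIterativeBUC iterates with MapReduce: recursively partition on each remaining dimension, and prune any partition below the iceberg threshold. The toy relation, threshold, and output format are mine; the MapReduce partitioning/sorting machinery is omitted.

```python
# BUC-style bottom-up cube computation with iceberg pruning.
from itertools import groupby

def buc(rows, dims, prefix=(), minsup=2):
    if len(rows) < minsup:               # iceberg pruning cuts recursion
        return
    print(prefix, "count =", len(rows))  # emit the aggregate for this cell
    for d, key in enumerate(dims):
        # Partition on dimension d, then recurse on the remaining dims.
        for val, group in groupby(sorted(rows, key=key), key=key):
            buc(list(group), dims[d + 1:], prefix + (val,), minsup)

rows = [("seoul", "a", 1), ("seoul", "a", 2), ("seoul", "b", 1),
        ("busan", "a", 1), ("busan", "a", 3)]
dims = [lambda r: r[0], lambda r: r[1]]  # group by city, then by product
buc(rows, dims, minsup=2)
```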

An Efficient Distributed Shared Memory System for Parallel GIS (병렬 GIS를 위한 효율적인 분산공유메모리 시스템)

  • Jeong, Sang-Hwa;Ryu, Gwang-Yeol;Go, Yun-Yeong;Gwak, Min-Seok
    • Journal of KIISE: Computing Practices and Letters / v.5 no.6 / pp.700-707 / 1999
  • In this paper, we propose a distributed shared memory (DSM) based parallel processing system to process GIS-related computations efficiently in real time. The system is based on a software DSM module implemented on top of a message-passing distributed-memory MIMD computer. In the DSM system, the spatial object, which is the fundamental structure for representing GIS data, is used as the basic unit of sharing, and a read-only shared data type is added to reflect the characteristics of GIS data. In addition, bulk access to multiple shared objects is made possible to reduce network overhead. A guided self-scheduling method is used for efficient load balancing when distributing GIS data to parallel processors. The experimental results show that the DSM system performs better than an MPI-based message-passing system through efficient utilization of the network cache, in spite of the software DSM's overhead.
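
A hedged sketch of guided self-scheduling (GSS), the load-balancing rule this paper uses to hand out GIS data: each request receives about ceil(remaining / p) items, so chunks start large and shrink as work runs out, balancing the tail.

```python
# Guided self-scheduling chunk sizes for n_items over n_procs.
import math

def guided_chunks(n_items, n_procs):
    remaining = n_items
    while remaining > 0:
        chunk = math.ceil(remaining / n_procs)  # shrinking chunk size
        yield chunk
        remaining -= chunk

# 100 work items over 4 processors: big chunks first, small at the end.
print(list(guided_chunks(100, 4)))
# -> [25, 19, 14, 11, 8, 6, 5, 3, 3, 2, 1, 1, 1, 1]
```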

A Novel High Performance List Scheduling Algorithm for Distributed Heterogeneous Computing Systems (분산 이기종 컴퓨팅 시스템을 위한 새로운 고성능 리스트 스케줄링 알고리즘)

  • Yoon, Wan-Oh;Yoon, Jun-Chul;Yoon, Jung-Hee;Choi, Sang-Bang
    • Journal of the Institute of Electronics Engineers of Korea CI / v.47 no.1 / pp.135-145 / 2010
  • Efficient Directed Acyclic Graph (DAG) scheduling is critical for achieving high performance in a Distributed Heterogeneous Computing System (DHCS). In this paper, we present a new high-performance scheduling algorithm for DHCS, called the LCFT (Levelized Critical First Task) algorithm. LCFT is a list-based scheduling algorithm that uses a new attribute to efficiently select tasks for scheduling in a DHCS. The complexity of LCFT is $O((v + e)(p + \log v))$. The performance of the algorithm has been evaluated by applying it to some practical DAGs and by comparing it with other existing scheduling algorithms such as PETS, HPS, HCPT, and GCA in terms of schedule length and speedup. The comparison studies show that LCFT significantly outperforms PETS, HPS, HCPT, and GCA in schedule length and speedup.
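
For context, a hedged skeleton of generic list scheduling on heterogeneous processors: take tasks in priority order and place each on the processor giving the earliest finish time. The priority below is plain topological order and communication costs are ignored; LCFT's levelized critical-first attribute would replace these simplifications, and all names here are mine.

```python
# Generic list scheduling of a DAG on heterogeneous processors.
def list_schedule(tasks, deps, cost, n_procs):
    """tasks: ids in topological order; deps: task -> set of parents;
    cost[t][p]: execution time of task t on processor p."""
    proc_free = [0.0] * n_procs
    finish, placement = {}, {}
    for t in tasks:
        ready = max((finish[d] for d in deps[t]), default=0.0)
        # Pick the processor giving the earliest finish time.
        best = min(range(n_procs),
                   key=lambda p: max(ready, proc_free[p]) + cost[t][p])
        start = max(ready, proc_free[best])
        finish[t] = start + cost[t][best]
        proc_free[best] = finish[t]
        placement[t] = best
    return placement, max(finish.values())  # schedule length (makespan)

deps = {"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b", "c"}}
cost = {"a": [2, 3], "b": [3, 1], "c": [2, 2], "d": [4, 2]}
print(list_schedule(["a", "b", "c", "d"], deps, cost, n_procs=2))
```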