Title/Summary/Keyword: GPU Cluster

Implementation of a GPU Cluster System using Inexpensive Graphics Devices (저가의 그래픽스 장치를 이용한 GPU 클러스터 시스템 구현)

  • Lee, Jong-Min;Lee, Jung-Hwa;Kim, Seong-Woo
    • Journal of Korea Multimedia Society, v.14 no.11, pp.1458-1466, 2011
  • Recently, research on GPGPU has been carried out actively as the performance of GPUs has increased rapidly. In this paper, we propose a system architecture, modeled on existing supercomputer architectures, for a cost-effective system using GPUs in low-cost graphics devices, and we implement a GPU cluster system with eight GPUs. We also build a software development environment suitable for the GPU cluster system and use it for performance evaluation by implementing the n-body problem. The results show that using multiple GPUs is efficient when the problem size is large, owing to the communication cost. In addition, we could compute up to eight million celestial bodies by applying a block-by-block calculation method that mitigates the problem-size constraint imposed by the limited resources of GPUs.
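
A minimal sketch of the block-by-block evaluation the abstract describes, using NumPy on the CPU as a stand-in for per-GPU kernels; the block size, softening constant, and body count are illustrative assumptions, not values from the paper:

```python
import numpy as np

G = 6.674e-11      # gravitational constant
SOFTENING = 1e-3   # keeps the force finite for near-coincident bodies
BLOCK = 512        # source bodies per pass, bounded by (GPU) memory

def accelerations(pos, mass, block=BLOCK):
    """Accumulate accelerations one block of source bodies at a time,
    so the working set never exceeds one device's memory budget."""
    n = pos.shape[0]
    acc = np.zeros_like(pos)
    for start in range(0, n, block):
        src_p = pos[start:start + block]             # one block of sources
        src_m = mass[start:start + block]
        diff = src_p[None, :, :] - pos[:, None, :]   # (n, b, 3) separations
        dist2 = (diff ** 2).sum(-1) + SOFTENING ** 2
        inv_r3 = dist2 ** -1.5
        acc += G * (diff * (src_m * inv_r3)[:, :, None]).sum(axis=1)
    return acc

rng = np.random.default_rng(0)
n = 4096
pos = rng.standard_normal((n, 3))
mass = rng.uniform(1.0, 2.0, n)
print(accelerations(pos, mass).shape)   # (4096, 3)
```

In a multi-GPU run, each GPU would own a slice of `pos` and exchange source blocks with its peers, which is where the communication cost noted in the abstract enters.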

Multi-communication layered HPL model and its application to GPU clusters

  • Kim, Young Woo;Oh, Myeong-Hoon;Park, Chan Yeol
    • ETRI Journal, v.43 no.3, pp.524-537, 2021
  • High-performance Linpack (HPL) is among the most popular benchmarks for evaluating the capabilities of computing systems and has been used as a standard to compare the performance of computing systems since the early 1980s. In the initial system-design stage, it is critical to estimate the capabilities of a system quickly and accurately. However, the original HPL mathematical model based on a single core and single communication layer yields varying accuracy for modern processors and accelerators comprising large numbers of cores. To reduce the performance-estimation gap between the HPL model and an actual system, we propose a mathematical model for multi-communication layered HPL. The effectiveness of the proposed model is evaluated by applying it to a GPU cluster and well-known systems. The results reveal performance differences of 1.1% on a single GPU. The GPU cluster and well-known large system show 5.5% and 4.1% differences on average, respectively. Compared to the original HPL model, the proposed multi-communication layered HPL model provides performance estimates within a few seconds and a smaller error range from the processor/accelerator level to the large system level.
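
A toy version of the idea: the original HPL model charges (2/3)N³ flops against aggregate peak throughput plus one communication layer, while a layered variant sums latency and volume over each layer a message crosses (e.g., NVLink inside a node, InfiniBand between nodes). The coefficients and panel-broadcast cost model below are illustrative assumptions, not the authors' calibrated model:

```python
def layered_comm_time(msg_bytes, layers):
    """Charge each communication layer the message crosses, e.g.
    layers = [(nvlink_latency, nvlink_bw), (ib_latency, ib_bw)]."""
    return sum(lat + msg_bytes / bw for lat, bw in layers)

def hpl_time(n, n_procs, r_peak, layers, nb=384):
    """Estimate HPL wall time: (2/3)n^3 flops at aggregate peak rate,
    plus one panel broadcast per column block of width nb."""
    t_compute = (2.0 / 3.0) * n ** 3 / (n_procs * r_peak)
    panel_bytes = n * nb * 8                       # double precision
    t_comm = (n // nb) * layered_comm_time(panel_bytes, layers)
    return t_compute + t_comm

# e.g. 8 GPUs at 7 TFLOP/s, NVLink then InfiniBand layers (assumed figures)
layers = [(2e-6, 150e9), (5e-6, 12.5e9)]
print(f"{hpl_time(200_000, 8, 7e12, layers):.0f} s estimated")
```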

A design of GPU container co-execution framework measuring interference among applications (GPU 컨테이너 동시 실행에 따른 응용의 간섭 측정 프레임워크 설계)

  • Kim, Sejin;Kim, Yoonhee
    • KNOM Review, v.23 no.1, pp.43-50, 2020
  • As the general-purpose graphics processing unit (GPGPU) has recently come to play an essential role in high-performance computing, several cloud service providers offer GPU services. Most container-based cluster orchestration platforms in a cloud environment allocate an integer number of GPUs to each job and do not allow a node to be shared with other jobs. In this case, the resource utilization of a GPU node may be low if a job does not intensively use either the many cores or the large memory of the GPU. GPU virtualization brings opportunities to realize kernel concurrency and share resources. However, performance may vary depending on the characteristics of the applications running concurrently and the interference among them due to resource contention on a node. This paper proposes a GPU container co-execution framework, with multiple server creation and execution based on Kubernetes, a container orchestration platform, for measuring the interference that may occur when GPU resources are shared. Performance changes under different scheduling policies were investigated by executing several jobs on a GPU. The results show that optimal scheduling is not possible when only GPU memory and computing resource usage are considered. Interference caused by co-execution among applications is measured using the framework.
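
As a hedged illustration of the setup step only, the sketch below uses the official Kubernetes Python client to pin two GPU containers to one node so their interference can be observed; the node name, namespace, images, and commands are assumptions, and co-locating work on the same physical GPU additionally requires a sharing mechanism (e.g., time-slicing or MPS) that the standard device plugin does not provide:

```python
from kubernetes import client, config

config.load_kube_config()          # or load_incluster_config() in-cluster
v1 = client.CoreV1Api()

def gpu_pod(name, image, command):
    """Build a pod pinned to one node and requesting a single GPU."""
    return client.V1Pod(
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1PodSpec(
            node_name="gpu-node-1",            # assumed node name
            restart_policy="Never",
            containers=[client.V1Container(
                name=name,
                image=image,
                command=command,
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}),
            )],
        ),
    )

# launch two workloads side by side, then compare against their solo runtimes
for name, image in [("job-a", "train-image:latest"),
                    ("job-b", "infer-image:latest")]:
    v1.create_namespaced_pod(namespace="default",
                             body=gpu_pod(name, image, ["python", "run.py"]))
```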

Scalable Prediction Models for Airbnb Listing in Spark Big Data Cluster using GPU-accelerated RAPIDS

  • Muralidharan, Samyuktha;Yadav, Savita;Huh, Jungwoo;Lee, Sanghoon;Woo, Jongwook
    • Journal of information and communication convergence engineering, v.20 no.2, pp.96-102, 2022
  • We aim to build predictive models for Airbnb prices using GPU-accelerated RAPIDS in a big data cluster. The Airbnb Listings datasets are used for the predictive analysis. Several machine-learning algorithms are adopted to build models that predict the price of Airbnb listings. We compare the results of traditional and big data approaches to machine learning for price prediction and discuss the performance of the models. We built big data models using a Databricks Spark cluster, a distributed parallel computing system. Furthermore, we implemented models that use multiple GPUs with RAPIDS in the Spark cluster. One model was developed using the XGBoost algorithm, whereas the other models were developed using traditional central processing unit (CPU)-based algorithms. This study compares all the models in terms of accuracy metrics and computing time. We observed that the XGBoost model with RAPIDS on GPUs achieved the highest accuracy and the shortest computing time.
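
A hedged, single-machine sketch of the GPU model: plain XGBoost with its GPU histogram tree method stands in for the Spark/RAPIDS pipeline, and the file name, feature columns, and hyperparameters are illustrative assumptions:

```python
import pandas as pd
import xgboost as xgb
from sklearn.model_selection import train_test_split

df = pd.read_csv("airbnb_listings.csv")        # assumed local extract
features = ["accommodates", "bedrooms", "bathrooms",
            "review_scores_rating", "minimum_nights"]
X_tr, X_te, y_tr, y_te = train_test_split(
    df[features], df["price"], test_size=0.2, random_state=42)

model = xgb.XGBRegressor(
    tree_method="gpu_hist",    # histogram construction on the GPU
    n_estimators=500,
    max_depth=8,
    learning_rate=0.1,
)
model.fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))
```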

Visualization of Volume Dataset using GPU Cluster and Tiled Display (GPU 클러스터 및 타일형 디스플레이를 이용한 볼륨 데이터의 고해상도 가시화)

  • Lee, Joong-Youn
    • Proceedings of the Korea Information Processing Society Conference, 2005.11a, pp.1395-1398, 2005
  • Volume rendering is a visualization technique that extracts meaningful information from volume data of three or more dimensions and presents it intuitively; it is widely used in fields such as medical imaging, meteorology, and fluid dynamics. Meanwhile, rapid advances in PC hardware have made it possible to visualize large volume datasets, once feasible only on supercomputers, in an ordinary PC environment. The vector operations of GPU vertex and pixel shaders, optimized for numerical computation, enable fast volume visualization. However, the limited memory capacity of GPUs has left fast visualization of large volume data a difficult problem. In this paper, we design and implement a system that distributes volume data larger than a single GPU's texture memory across the memories of multiple GPUs, renders it quickly using vertex and pixel shaders, and visualizes it at high resolution on a tiled display.
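
A hedged sketch of the distribution step described above: a volume too large for one GPU's texture memory is cut into overlapping bricks, one per GPU; the shapes, GPU count, and overlap are illustrative assumptions, and the shader-based rendering itself is omitted:

```python
import numpy as np

def brick_volume(volume, n_gpus, overlap=1):
    """Slice the volume along z into n_gpus bricks, overlapping by one
    voxel so trilinear sampling stays seamless at brick borders."""
    z = volume.shape[0]
    edges = np.linspace(0, z, n_gpus + 1, dtype=int)
    bricks = []
    for i in range(n_gpus):
        lo = max(edges[i] - overlap, 0)
        hi = min(edges[i + 1] + overlap, z)
        bricks.append(volume[lo:hi])     # upload bricks[i] to GPU i
    return bricks

volume = np.random.rand(256, 256, 256).astype(np.float32)   # ~64 MB
print([b.shape for b in brick_volume(volume, n_gpus=4)])
```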


Study on the method of acquiring GPU usage statistics information in cluster system (클러스터 시스템에서 GPU 사용 통계정보 획득 방안에 대한 연구)

  • Kwon, Min-Woo;Kim, Sung-Jun;Yoon, JunWeon;Hong, TaeYoung
    • Proceedings of the Korea Information Processing Society Conference, 2018.10a, pp.476-477, 2018
  • The Korea Institute of Science and Technology Information (KISTI) operates a GPU cluster as the auxiliary accelerator system of its 4th supercomputer to meet the demand for research infrastructure in big data and artificial intelligence. The GPU cluster system uses the SLURM job scheduler to distribute jobs efficiently among users. This paper introduces a method of acquiring per-job GPU usage statistics for user jobs executed through the SLURM job scheduler.
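
A hedged sketch of one way to collect the raw data for such statistics: periodically sample nvidia-smi on each compute node and later join the samples with job start/end times from sacct; the log format and sampling period are assumptions, not the method the paper specifies:

```python
import csv
import io
import subprocess
import time

def gpu_utilization():
    """Return [(gpu_index, util_percent, mem_used_MiB), ...]."""
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=index,utilization.gpu,memory.used",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True).stdout
    return [tuple(int(v) for v in row)
            for row in csv.reader(io.StringIO(out))]

def sample(log_path, period=60):
    """Append one timestamped sample per GPU every `period` seconds;
    a post-processing step joins these with `sacct` job records."""
    with open(log_path, "a") as f:
        while True:
            ts = int(time.time())
            for idx, util, mem in gpu_utilization():
                f.write(f"{ts},{idx},{util},{mem}\n")
            f.flush()
            time.sleep(period)
```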

GPU-Accelerated Password Cracking of PDF Files

  • Kim, Keon-Woo;Lee, Sang-Su;Hong, Do-Won;Ryou, Jae-Cheol
    • KSII Transactions on Internet and Information Systems (TIIS), v.5 no.11, pp.2235-2253, 2011
  • Digital document files such as Adobe Acrobat or MS Office documents are encrypted by their own ciphering algorithms with a user password. When this password is not known to a user or a forensic inspector, it must be recovered to open the encrypted file. Password cracking by brute-force search is guaranteed to discover the password but is a time-consuming process. This paper presents a new method of speeding up password recovery on a graphics processing unit (GPU) using the Compute Unified Device Architecture (CUDA). PDF files are chosen as the password-cracking target, and the Adobe Acrobat password recovery algorithm is examined. Experimental results show that the proposed method gives high performance at low cost, with a cluster of GPU nodes significantly speeding up password recovery by exploiting many computing nodes. Password-cracking performance increases linearly in proportion to the number of computing nodes and GPUs.
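
A hedged sketch of the partitioning idea only: each worker (a node or GPU) enumerates a disjoint share of the password space, so throughput scales with the number of workers. SHA-256 stands in for Adobe's actual key-derivation algorithm, which the paper examines but is omitted here:

```python
import hashlib
import string
from itertools import product

ALPHABET = string.ascii_lowercase
LENGTH = 5

def candidates(rank, world):
    """Worker `rank` of `world` checks every world-th candidate."""
    for i, chars in enumerate(product(ALPHABET, repeat=LENGTH)):
        if i % world == rank:
            yield "".join(chars)

def crack(target_hex, rank, world):
    for pw in candidates(rank, world):
        if hashlib.sha256(pw.encode()).hexdigest() == target_hex:
            return pw
    return None

target = hashlib.sha256(b"crypt").hexdigest()
for rank in range(4):                  # emulate four workers serially
    pw = crack(target, rank, world=4)
    if pw:
        print(f"worker {rank} recovered: {pw}")
        break
```

On a GPU, each thread takes one slice of the space in the same way, which is why the abstract can report near-linear scaling in nodes and GPUs.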

A study on how to generate GPU usage statistics for each task in a cluster system operated by shared node policy (공유노드 정책으로 운영 중인 클러스터 시스템에서 작업별 GPU 사용 통계 생성 방안에 대한 연구)

  • Kwon, Min-Woo;Yoon, JunWeon;Hong, TaeYoung
    • Proceedings of the Korea Information Processing Society Conference, 2022.11a, pp.37-39, 2022
  • KISTI (Korea Institute of Science and Technology Information) provides researchers with Nurion, the main system of its 5th supercomputer, and Neuron, its auxiliary system. Since Nurion is a cluster equipped with Intel Knights Landing processors, Neuron is configured as a heterogeneous cluster equipped with GPUs to meet the demand for research infrastructure in artificial intelligence and big data. To distribute computing resources to researchers efficiently, Neuron is serviced in an environment where multiple jobs can run on a single compute node under the shared-node policy of the SLURM job scheduler. This paper introduces a technique for generating per-job GPU usage statistics in a cluster system operated under the shared-node policy.
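
On a shared node the extra step is attributing each GPU process to its job. A hedged sketch: list the compute processes per GPU with nvidia-smi, then resolve each PID to a SLURM job id through its cgroup path; the `job_<id>` cgroup naming is a common SLURM convention assumed here, not taken from the paper:

```python
import re
import subprocess

def gpu_processes():
    """Return [(gpu_uuid, pid, used_MiB), ...] for this node."""
    out = subprocess.run(
        ["nvidia-smi",
         "--query-compute-apps=gpu_uuid,pid,used_gpu_memory",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True).stdout
    rows = [line.split(", ") for line in out.splitlines() if line]
    return [(uuid, int(pid), int(mem)) for uuid, pid, mem in rows]

def job_of(pid):
    """Extract the SLURM job id from the process's cgroup, if any."""
    with open(f"/proc/{pid}/cgroup") as f:
        m = re.search(r"job_(\d+)", f.read())
    return int(m.group(1)) if m else None

usage = {}
for uuid, pid, mem in gpu_processes():
    usage.setdefault(job_of(pid), []).append((uuid, mem))
print(usage)   # {job_id: [(gpu_uuid, MiB), ...], ...}
```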

A study on comparison and analysis of interconnect network communication performance between computing nodes in GPU cluster system (GPU 클러스터 시스템의 계산노드 간 인터커넥트 네트워크 통신 성능 비교 분석 연구)

  • Min-Woo Kwon;Do-Sik An;TaeYoung Hong
    • Proceedings of the Korea Information Processing Society Conference, 2023.11a, pp.2-4, 2023
  • Neuron, KISTI's GPU cluster system, is a cluster equipped with a total of 260 NVIDIA A100 and V100 GPUs. Neuron's compute nodes are connected by high-performance InfiniBand (IB) interconnect, enabling high-bandwidth parallel communication for multi-node jobs. This paper introduces a method of comparing and analyzing the communication performance of the interconnect network using NVIDIA's NCCL benchmark code.
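
In the spirit of the NCCL benchmarks the paper uses, a hedged PyTorch sketch that times all_reduce over the NCCL backend; launch with torchrun, one process per GPU. The payload size and iteration counts are illustrative, and nccl-tests itself reports an additional bus-bandwidth figure not computed here:

```python
import time
import torch
import torch.distributed as dist

# run: torchrun --nproc_per_node=<gpus> --nnodes=<nodes> ... this_script.py
dist.init_process_group("nccl")
rank = dist.get_rank()
torch.cuda.set_device(rank % torch.cuda.device_count())

nbytes = 256 * 1024 * 1024                    # 256 MiB payload
x = torch.ones(nbytes // 4, device="cuda")    # float32 elements

for _ in range(5):                            # warm-up rounds
    dist.all_reduce(x)
torch.cuda.synchronize()

iters = 20
t0 = time.time()
for _ in range(iters):
    dist.all_reduce(x)
torch.cuda.synchronize()
elapsed = (time.time() - t0) / iters

if rank == 0:
    print(f"all_reduce {nbytes / 2**20:.0f} MiB: "
          f"{nbytes / elapsed / 1e9:.1f} GB/s (algorithmic)")
dist.destroy_process_group()
```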

Scheduling of Artificial Intelligence Workloads in Cloud Environments Using Genetic Algorithms (유전 알고리즘을 이용한 클라우드 환경의 인공지능 워크로드 스케줄링)

  • Seokmin Kwon;Hyokyung Bahn
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.24 no.3, pp.63-67, 2024
  • Recently, artificial intelligence (AI) workloads spanning industries such as smart logistics, FinTech, and entertainment have been executed on the cloud. In this paper, we address the scheduling of various AI workloads on a multi-tenant cloud system composed of heterogeneous GPU clusters. Traditional scheduling lowers GPU utilization in such environments, significantly degrading system performance. To resolve this, we present a new scheduling approach that uses genetic algorithm-based optimization, implemented within a process-based event simulation framework. Trace-driven simulations with diverse AI workload traces collected from Alibaba's MLaaS cluster demonstrate that the proposed scheduling significantly improves GPU utilization compared with conventional scheduling.
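
A hedged, minimal sketch of the genetic-algorithm idea: a chromosome maps each job to a node, lower makespan means higher fitness, and elitist selection with one-point crossover and mutation evolves the assignment. Job runtimes, node count, and GA parameters are illustrative assumptions, not the paper's setup:

```python
import random

random.seed(7)
JOBS = [random.uniform(1, 10) for _ in range(40)]   # runtimes in hours
NODES = 8                                           # heterogeneous in the paper

def makespan(assign):
    """Finish time of the busiest node under this job-to-node mapping."""
    load = [0.0] * NODES
    for runtime, node in zip(JOBS, assign):
        load[node] += runtime
    return max(load)

def evolve(pop_size=60, generations=200, mut_rate=0.05):
    pop = [[random.randrange(NODES) for _ in JOBS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)                      # lower makespan is fitter
        survivors = pop[:pop_size // 2]             # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(JOBS))    # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(len(child)):             # per-gene mutation
                if random.random() < mut_rate:
                    child[i] = random.randrange(NODES)
            children.append(child)
        pop = survivors + children
    return min(pop, key=makespan)

best = evolve()
print(f"best makespan: {makespan(best):.2f} hours")
```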