GPU Memory Management Technique to Improve the Performance of GPGPU Task of Virtual Machines in RPC-Based GPU Virtualization Environments

  • Jihun Kang (BK21 FOUR Education and Research Group for Computer Science, College of Informatics, Korea University)
  • Received: 2020.09.14
  • Accepted: 2020.11.02
  • Published: 2021.05.31

Abstract

Remote Procedure Call (RPC)-based Graphics Processing Unit (GPU) virtualization is one technique for sharing a GPU among multiple user virtual machines. However, in a cloud environment, commodity GPUs, unlike CPUs or memory, provide no resource-isolation mechanism that can limit how much of the device each virtual machine consumes. In an RPC-based virtualization environment in particular, the GPU tasks issued by each virtual machine execute as separate host-side processes, so the absence of resource isolation leads to performance degradation through resource contention. GPU memory contention worsens as the virtual machines' aggregate resource demand grows, and fairness suffers because equal performance across virtual machines cannot be guaranteed. This paper analyzes the performance degradation that arises in an RPC-based GPU virtualization environment when the GPU memory demand of the virtual machines exceeds the available GPU memory capacity, and proposes a GPU memory management technique to address it. Experiments show that the proposed technique improves the performance of GPGPU tasks.
