Real-Time GPU Task Monitoring and Node List Management Techniques for Container Deployment in a Cluster-Based Container Environment

  • Ji-Hoon Kang (BK21 FOUR Computer Science Education Research Group, Korea University)
  • Joon-Min Gil (School of Computer Software, Daegu Catholic University)
  • Received : 2022.08.01
  • Accepted : 2022.08.12
  • Published : 2022.11.30

Abstract

Recently, as Internet-based services have become increasingly personalized and customized, they face growing requirements for real-time processing, such as real-time AI inference and data analysis that must be handled immediately according to the user's situation or request. A real-time task has a deadline within which it must return its results after it starts, and guaranteeing that deadline is directly linked to service quality. However, conventional container systems are limited in operating real-time tasks because they provide no means of assigning and managing deadlines for the tasks executed in containers. In addition, tasks such as AI inference and data analysis fundamentally rely on graphics processing units (GPUs), and because GPU resources are generally not performance-isolated between containers, containers sharing a GPU affect one another's performance; as a result, a node's resource usage alone cannot determine each container's deadline guarantee rate or whether a new real-time container should be deployed to that node. In this paper, to support real-time processing of GPU tasks running in containers, we propose a monitoring technique that tracks and manages container deadlines and the execution status of real-time GPU tasks, together with a node list management technique that places containers on appropriate nodes in the cluster so that their deadlines can be guaranteed. Experiments further show that the proposed techniques have only a very small impact on the system.
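As a rough illustration of the monitoring idea summarized above, the following minimal sketch, which is not the authors' implementation, polls nvidia-smi for the processes currently holding the GPU, maps each PID to the Docker container that owns it through /proc/<pid>/cgroup, and compares each registered container's elapsed time with its deadline. The deadline registry, the polling period, and the cgroup-path layout are assumptions made purely for illustration.

```python
"""Minimal sketch of per-container real-time GPU task monitoring.

This is NOT the paper's implementation; it only illustrates the kind of
bookkeeping described in the abstract. Assumed: a deadline registry filled
by whoever launches the real-time containers, a fixed polling period, and
a Docker cgroup path layout that exposes the container ID under /proc.
"""
import re
import subprocess
import time

POLL_INTERVAL_SEC = 1.0  # assumed monitoring period

# Hypothetical registry: short container ID -> (deadline in seconds, start time)
deadline_table = {}


def register_task(container_id, deadline_sec):
    """Record the deadline of a real-time GPU task when its container starts."""
    deadline_table[container_id] = (deadline_sec, time.monotonic())


def gpu_pids():
    """PIDs of processes currently running compute work on the GPU (via nvidia-smi)."""
    out = subprocess.run(
        ["nvidia-smi", "--query-compute-apps=pid", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [int(tok) for tok in out.split() if tok.isdigit()]


def container_of(pid):
    """Best-effort PID -> container ID mapping via the cgroup path (Docker layout assumed)."""
    try:
        cgroup = open(f"/proc/{pid}/cgroup").read()
    except OSError:
        return None
    match = re.search(r"docker[/-]([0-9a-f]{12,64})", cgroup)
    return match.group(1)[:12] if match else None


def check_deadlines():
    """Report elapsed time versus deadline for every registered container."""
    active = {container_of(pid) for pid in gpu_pids()} - {None}
    for cid, (deadline, started) in deadline_table.items():
        elapsed = time.monotonic() - started
        state = "RUNNING" if cid in active else "NOT ON GPU"
        missed = " (DEADLINE MISSED)" if elapsed > deadline else ""
        print(f"{cid}: {state}, elapsed {elapsed:.1f}s / deadline {deadline:.1f}s{missed}")


if __name__ == "__main__":
    register_task("0123456789ab", deadline_sec=5.0)  # hypothetical container and deadline
    while True:
        check_deadlines()
        time.sleep(POLL_INTERVAL_SEC)
```

In practice, the same information could be gathered through the NVIDIA System Management Interface [11] or container-level monitors such as cAdvisor [9]; the sketch only shows the shape of the bookkeeping.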

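The node list management idea can likewise be pictured as a small amount of bookkeeping on the scheduler side. The sketch below, again an assumption-laden illustration rather than the proposed system, keeps a per-node record of the deadline guarantee rate reported by each node's monitor, orders the node list by that rate, and selects a placement target for a new real-time container only from nodes above a guarantee-rate threshold instead of relying on resource usage alone. The node names, the threshold value, and the reporting path are hypothetical.

```python
"""Minimal sketch of deadline-aware node list management for container placement.

This is NOT the proposed system; it only illustrates the selection logic the
abstract describes at a high level. Assumed: each node periodically reports the
deadline guarantee rate of its real-time GPU containers, and a single threshold
decides whether a node may receive another real-time container.
"""
from dataclasses import dataclass, field

GUARANTEE_THRESHOLD = 0.95  # assumed minimum acceptable deadline guarantee rate


@dataclass
class NodeStatus:
    name: str
    deadline_guarantee_rate: float  # deadlines met / deadlines observed on this node
    rt_containers: int = 0          # real-time containers already placed on the node


@dataclass
class NodeList:
    nodes: list = field(default_factory=list)

    def update(self, status: NodeStatus):
        """Replace (or add) a node's report and keep the list ordered by guarantee rate."""
        self.nodes = [n for n in self.nodes if n.name != status.name] + [status]
        self.nodes.sort(key=lambda n: n.deadline_guarantee_rate, reverse=True)

    def select_node(self):
        """Pick a placement target for a new real-time container.

        Only nodes whose current containers are meeting deadlines well enough
        (rate >= threshold) qualify; ties go to the node with fewer real-time
        containers. Returns None if no node qualifies.
        """
        candidates = [n for n in self.nodes
                      if n.deadline_guarantee_rate >= GUARANTEE_THRESHOLD]
        if not candidates:
            return None
        best = min(candidates,
                   key=lambda n: (-n.deadline_guarantee_rate, n.rt_containers))
        return best.name


if __name__ == "__main__":
    node_list = NodeList()
    node_list.update(NodeStatus("node-1", deadline_guarantee_rate=0.99, rt_containers=2))
    node_list.update(NodeStatus("node-2", deadline_guarantee_rate=0.91, rt_containers=1))
    print(node_list.select_node())  # -> "node-1": the only node above the assumed threshold
```

Returning None when no node qualifies corresponds to deferring or rejecting the new real-time container rather than risking deadline misses on containers that are already placed.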

Keywords

Acknowledgement

This research was supported by National Research Foundation of Korea (NRF) grants funded by the Korean government: the Ministry of Education in 2022 (2022R1I1A1A01063551) and the Ministry of Science and ICT in 2019 (NRF-2019R1F1A1062039).

References

  1. J. H. Kang and J. M. Gil, "Deadline information management techniques to support real-time GPU tasks in container-based cloud environments," in Proceedings of the 2022 Annual Spring Conference of the Korea Information Processing Society (KIPS), Vol.29, No.1, pp.56-59, 2022.
  2. Docker, Docker [Internet], https://www.docker.com/
  3. NVIDIA, Compute Unified Device Architecture (CUDA) [Internet], https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html
  4. NVIDIA, NVIDIA Docker [Internet], https://github.com/NVIDIA/nvidia-docker
  5. Docker, docker ps [Internet], https://docs.docker.com/engine/reference/commandline/ps/
  6. Docker, docker stats [Internet], https://docs.docker.com/engine/reference/commandline/stats/
  7. Docker, docker top [Internet], https://docs.docker.com/engine/reference/commandline/top/
  8. The Linux Foundation, Prometheus [Internet], https://prometheus.io/
  9. Google, cAdvisor [Internet], https://hub.docker.com/r/google/cadvisor/
  10. NVIDIA, NVIDIA Docker [Internet], https://github.com/NVIDIA/nvidia-docker
  11. NVIDIA, NVIDIA System Management Interface [Internet], https://developer.nvidia.com/nvidia-system-management-interface
  12. J. Ru, Y. Yang, J. Grundy, J. Keung, and L. Hao, "An efficient deadline constrained and data locality aware dynamic scheduling framework for multitenancy clouds," Concurrency and Computation: Practice and Experience, Vol.33, No.5, e6037, 2021.
  13. J. Lou, Z. Tang, S. Zhang, W. Jia, W. Zhao, and J. Li, "Cost-effective scheduling for dependent tasks with tight deadline constraints in mobile edge computing," IEEE Transactions on Mobile Computing (Early Access), 2022.
  14. H. Xia, M. Liu, Y. Chen, X. Jin, Z. Wang, and F. Wang, "A load balancing strategy of container virtual machine cloud microservice based on deadline limit," 14th International Conference on Measuring Technology and Mechatronics Automation (ICMTMA), pp.998-1002, 2022.
  15. V. Struhar, S. S. Craciunas, M. Ashjaei, M. Behnam, and A. V. Papadopoulos, "React: Enabling real-time container orchestration," 26th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), pp.1-8, 2021.
  16. C. Singh, P. Kumari, R. Mishra, H. P. Gupta, and T. Dutta, "Secure industrial IoT task containerization with deadline constraint: A Stackelberg game approach," IEEE Transactions on Industrial Informatics (Early Access), 2022.
  17. L. Ye, Y. Xia, L. Yang, and C. Yan, "SHWS: Stochastic hybrid workflows dynamic scheduling in cloud container services," IEEE Transactions on Automation Science and Engineering, Vol.19, No.3, pp.2620-2636, 2021.
  18. K. Dubey and S. C. Sharma, "A novel multi-objective CRPSO task scheduling algorithm with deadline constraint in cloud computing," Sustainable Computing: Informatics and Systems, Vol.32, 100605, 2021. https://doi.org/10.1016/j.suscom.2021.100605
  19. Google Brain, Tensorflow [Internet], https://www.tensorflow.org/
  20. Y. LeCun, C. Cortes, and C. Burges, MNIST handwritten digit database [Internet], http://yann.lecun.com/exdb/mnist/
  21. Docker, docker pause [Internet], https://docs.docker.com/engine/reference/commandline/pause/