Computation Offloading with Resource Allocation Based on DDPG in MEC

  • Sungwon Moon (Dept. of IT Engineering, Sookmyung Women's University)
  • Yujin Lim (Dept. of IT Engineering, Sookmyung Women's University)
  • Received : 2022.08.08
  • Accepted : 2022.09.19
  • Published : 2024.04.30

Abstract

Recently, multi-access edge computing (MEC) has emerged as a promising technology to alleviate the computing burden of vehicular terminals and to efficiently support vehicular applications. Vehicles can improve the quality of experience of their applications by offloading tasks to MEC servers. However, channel conditions vary over time because of interference among vehicles, path loss changes with vehicle mobility, and task arrivals at vehicles are stochastic. Because offloading depends on wireless data transmission, it is difficult to make an optimal joint offloading and resource allocation decision in such a dynamic MEC system. In this paper, we study computation offloading with resource allocation in a dynamic MEC system. The objective is to minimize power consumption and maximize throughput while meeting the delay constraints of tasks; to this end, the method allocates computing resources for local execution and transmission power for offloading. We formulate the problem as a Markov decision process and propose an offloading method based on the deep reinforcement learning algorithm deep deterministic policy gradient (DDPG). Simulation results show that the proposed method outperforms existing methods in terms of throughput and satisfaction of delay constraints.
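
To make the described approach concrete, the sketch below illustrates a DDPG actor-critic whose continuous action jointly sets the two controls named in the abstract: the local CPU resource share and the uplink transmission power for offloading. This is a minimal illustration, not the authors' implementation; the state layout, network sizes, hyperparameters, and the reward shaping (throughput minus a weighted power cost, penalized when a task misses its delay constraint) are assumptions made for exposition.

```python
# Minimal DDPG sketch for joint offloading/resource-allocation control (assumed design).
import torch
import torch.nn as nn

STATE_DIM = 4    # e.g., [channel gain, task size, queue backlog, remaining deadline] (assumed)
ACTION_DIM = 2   # [local CPU frequency share, transmit power share], both in [0, 1]

class Actor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM), nn.Sigmoid(),   # bounded continuous actions
        )
    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

def ddpg_update(actor, critic, target_actor, target_critic,
                actor_opt, critic_opt, batch, gamma=0.99, tau=0.005):
    """One DDPG step on a replay batch (s, a, r, s_next), each of shape [B, dim]."""
    s, a, r, s_next = batch
    with torch.no_grad():
        q_next = target_critic(s_next, target_actor(s_next))
        y = r + gamma * q_next                          # TD target
    critic_loss = nn.functional.mse_loss(critic(s, a), y)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    actor_loss = -critic(s, actor(s)).mean()            # deterministic policy gradient
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    for p, tp in zip(actor.parameters(), target_actor.parameters()):
        tp.data.mul_(1 - tau).add_(tau * p.data)        # soft target update
    for p, tp in zip(critic.parameters(), target_critic.parameters()):
        tp.data.mul_(1 - tau).add_(tau * p.data)
```

The Sigmoid output keeps both controls in [0, 1] so they can be scaled to each vehicle's maximum CPU frequency and transmit power; in a full training loop, exploration noise and a replay buffer would be wrapped around ddpg_update.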

Acknowledgement

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2021R1F1A1047113). This paper is an extended version of a paper presented at the Annual Spring Conference of KIPS (ASK 2022), held in Seoul, Republic of Korea, May 19-21, 2022 [13].

References

  1. A. Al-Fuqaha, M. Guizani, M. Mohammadi, M. Aledhari, and M. Ayyash, "Internet of Things: a survey on enabling technologies, protocols, and applications," IEEE Communications Surveys & Tutorials, vol. 17, no. 4, pp. 2347-2376, 2015. https://doi.org/10.1109/COMST.2015.2444095
  2. S. Liu, L. Liu, J. Tang, B. Yu, Y. Wang, and W. Shi, "Edge computing for autonomous driving: opportunities and challenges," Proceedings of the IEEE, vol. 107, no. 8, pp. 1697-1716, 2019. https://doi.org/10.1109/JPROC.2019.2915983
  3. Q. Wu, H. Liu, R. Wang, P. Fan, Q. Fan, and Z. Li, "Delay-sensitive task offloading in the 802.11p-based vehicular fog computing systems," IEEE Internet of Things Journal, vol. 7, no. 1, pp. 773-785, 2020. https://doi.org/10.1109/JIOT.2019.2953047
  4. K. Zhang, Y. Zhu, S. Leng, Y. He, S. Maharjan, and Y. Zhang, "Deep Learning empowered task offloading for mobile edge computing in urban informatics," IEEE Internet of Things Journal, vol. 6, no. 5, pp. 7635-7647, 2019. https://doi.org/10.1109/JIOT.2019.2903191
  5. Y. Dai, D. Xu, S. Maharjan, and Y. Zhang, "Joint load balancing and offloading in vehicular edge computing and networks," IEEE Internet of Things Journal, vol. 6, no. 3, pp. 4377-4387, 2019. https://doi.org/10.1109/JIOT.2018.2876298
  6. Y. Liu, H. Yu, S. Xie, and Y. Zhang, "Deep reinforcement learning for offloading and resource allocation in vehicle edge computing and networks," IEEE Transactions on Vehicular Technology, vol. 68, no. 11, pp. 11158-11168, 2019. https://doi.org/10.1109/TVT.2019.2935450
  7. A. Sadiki, J. Bentahar, R. Dssouli, and A. En-Nouaary, "Deep reinforcement learning for the computation offloading in MIMO-based edge computing," Ad Hoc Networks, vol. 141, article no. 103080, 2023. https://doi.org/10.1016/j.adhoc.2022.103080
  8. X. Chen, H. Zhang, C. Wu, S. Mao, Y. Ji, and M. Bennis, "Performance optimization in mobile-edge computing via deep reinforcement learning," in Proceedings of 2018 IEEE 88th Vehicular Technology Conference (VTC-Fall), Chicago, IL, USA, 2018, pp. 1-6. https://doi.org/10.1109/VTCFall.2018.8690980
  9. Z. Cheng, M. Min, M. Liwang, L. Huang, and G. Zhibin, "Multi-agent DDPG-based joint task partitioning and power control in fog computing networks," IEEE Internet of Things Journal, vol. 9, no. 1, pp. 104-116, 2022. https://doi.org/10.1109/JIOT.2021.3091508
  10. M. Li, J. Gao, L. Zhao, and X. Shen, "Deep reinforcement learning for collaborative edge computing in vehicular networks," IEEE Transactions on Cognitive Communications and Networking, vol. 6, no. 4, pp. 1122-1135, 2020. https://doi.org/10.1109/TCCN.2020.3003036
  11. J. Ren and S. Xu, "DDPG based computation offloading and resource allocation for MEC systems with energy harvesting," in Proceedings of 2021 IEEE 93rd Vehicular Technology Conference (VTC2021-Spring), Helsinki, Finland, 2021, pp. 1-5. https://doi.org/10.1109/VTC2021-Spring51267.2021.9448922
  12. X. Chen, H. Ge, L. Liu, S. Li, J. Han, and H. Gong, "Computing offloading decision based on DDPG algorithm in mobile edge computing," in Proceedings of 2021 IEEE 6th International Conference on Cloud Computing and Big Data Analytics (ICCCBDA), Chengdu, China, 2021, pp. 391-399. https://doi.org/10.1109/ICCCBDA51879.2021.9442599
  13. S. Moon and Y. Lim, "Performance comparison of deep reinforcement learning based computation offloading in MEC," in Proceedings of the Annual Conference of KIPS, vol. 29, no. 1, pp. 52-55, 2022.
  14. K. Jiang, H. Zhou, D. Li, X. Liu, and S. Xu, "A Q-learning based method for energy-efficient computation offloading in mobile edge computing," in Proceedings of 2020 29th International Conference on Computer Communications and Networks (ICCCN), Honolulu, HI, USA, 2020, pp. 1-7. https://doi.org/10.1109/ICCCN49398.2020.9209738
  15. B. Dab, N. Aitsaadi, and R. Langar, "Q-learning algorithm for joint computation offloading and resource allocation in edge cloud," in Proceedings of 2019 IFIP/IEEE Symposium on Integrated Network and Service Management (IM), Arlington, VA, USA, Apr. 2019, pp. 45-52.
  16. H. Zhu, Q. Wu, X. J. Wu, Q. Fan, P. Fan, and J. Wang, "Decentralized power allocation for MIMO-NOMA vehicular edge computing based on deep reinforcement learning," IEEE Internet of Things Journal, vol. 9, no. 14, pp. 12770-12782, 2022. https://doi.org/10.1109/JIOT.2021.3138434
  17. S. E. Mahmoodi, R. N. Uma, and K. P. Subbalakshmi, "Optimal joint scheduling and cloud offloading for mobile applications," IEEE Transactions on Cloud Computing, vol. 7, no. 2, pp. 301-313, 2019. https://doi.org/10.1109/TCC.2016.2560808
  18. Y. Wang, M. Sheng, X. Wang, L. Wang, and J. Li, "Mobile-edge computing: partial computation offloading using dynamic voltage scaling," IEEE Transactions on Communications, vol. 64, no. 10, pp. 4268-4282, 2016. https://doi.org/10.1109/TCOMM.2016.2599530
  19. S. Raza, M. A. Mirza, S. Ahmad, M. Asif, M. B. Rasheed, and Y. Ghadi, "A vehicle-to-vehicle relay-based task offloading scheme in vehicular communication networks," PeerJ Computer Science, vol. 7, article no. e486, 2021. https://doi.org/10.7717/peerj-cs.486
  20. P. A. Lopez, M. Behrisch, L. B. Walz, J. Erdmann, Y. P. Flotterod, R. Hilbrich, L. Lucken, J. Rummel, P. Wagner, and E. Wiessner, "Microscopic traffic simulation using SUMO," in Proceedings of the 2018 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, USA, 2018, pp. 2575-2582. https://doi.org/10.1109/ITSC.2018.8569938