Reinforcement Learning based Autonomous Emergency Steering Control in Virtual Environments

가상 환경에서의 강화학습 기반 긴급 회피 조향 제어

  • Lee, Hunki (Department of Mechanical Engineering, Sungkyunkwan University) ;
  • Kim, Taeyun (Department of Mechanical Engineering, Sungkyunkwan University) ;
  • Kim, Hyobin (Department of Mechanical Engineering, Sungkyunkwan University) ;
  • Hwang, Sung-Ho (Department of Mechanical Engineering, Sungkyunkwan University)
  • Received : 2022.11.08
  • Accepted : 2022.11.28
  • Published : 2022.12.01

Abstract

Recently, deep learning and AI have been applied to many areas of autonomous driving, including perception, sensor processing, decision-making, and control. This paper proposes a reinforcement learning-based controller for autonomous vehicles that handles path following, static obstacle avoidance, and pedestrian avoidance. To enable repetitive driving simulation, a reinforcement learning environment was constructed in a virtual environment. After training on path-following scenarios, the controller's performance was compared with that of the Pure-Pursuit and Stanley controllers, which are widely used for their good performance and simplicity. Autonomous emergency steering and autonomous emergency braking scenarios were then created for training, based on the test cases of the KNCAP test and assessment protocol. With zero collisions, the experimental results demonstrated that the reinforcement learning controller succeeded in both the stationary obstacle avoidance scenario and the pedestrian avoidance scenario under the given conditions.
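For context on the two baseline controllers the abstract compares against, the following is a minimal sketch of the standard Pure-Pursuit and Stanley steering laws (textbook formulations, not the paper's implementation; function and parameter names are illustrative):

```python
import math

def pure_pursuit_steer(wheelbase, lookahead_dist, alpha):
    """Pure-Pursuit steering law.

    alpha: angle (rad) from the vehicle heading to the look-ahead
    point on the reference path. Returns a front-wheel steering
    angle that drives the vehicle along an arc through that point.
    """
    return math.atan2(2.0 * wheelbase * math.sin(alpha), lookahead_dist)

def stanley_steer(heading_error, cross_track_error, speed, gain=1.0, eps=1e-6):
    """Stanley steering law.

    Combines the heading error with a cross-track correction term
    whose influence shrinks as vehicle speed grows; eps avoids
    division issues near zero speed.
    """
    return heading_error + math.atan2(gain * cross_track_error, speed + eps)
```

Both laws are geometric and require only a reference path and vehicle pose, which is why they are common baselines for learned steering controllers.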

Acknowledgement

This work was supported by the Transportation and Logistics R&D Program of the Ministry of Land, Infrastructure and Transport / Korea Agency for Infrastructure Technology Advancement (22TLRP-C152478-04) and by the University ICT Research Center support program of the Ministry of Science and ICT / Institute of Information & Communications Technology Planning & Evaluation (IITP-2022-2018-0-01426).

References

  1. C. Y. Chan, "Advancements, prospects, and impacts of automated driving systems," International Journal of Transportation Science and Technology, Vol.6, No.3, pp.208-216, 2017. https://doi.org/10.1016/j.ijtst.2017.07.008
  2. L. Li, K. Ota and M. Dong, "Humanlike driving: Empirical decision-making system for autonomous vehicles," IEEE Transactions on Vehicular Technology, Vol.67, No.8, pp.6814-6823, 2018. https://doi.org/10.1109/tvt.2018.2822762
  3. Y. Chen, C. Dong, P. Palanisamy, P. Mudalige, K. Muelling and J. M. Dolan, "Attention-based hierarchical deep reinforcement learning for lane change behaviors in autonomous driving," Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019.
  4. X. Liang, T. Wang, L. Yang and E. Xing, "CIRL: Controllable imitative reinforcement learning for vision-based self-driving," Proceedings of the European Conference on Computer Vision (ECCV), pp.584-599, 2018.
  5. K. S. Kim, J. I. Lee, S. W. Gwak, W. Y. Kang, D. Y. Shin and S. H. Hwang, "Construction of Database for Deep Learning-based Occlusion Area Detection in the Virtual Environment," Journal of Drive and Control, Vol.19, No.3, pp.9-15, 2022. https://doi.org/10.7839/KSFC.2022.19.3.009
  6. J. I. Lee, G. S. Gwak, K. S. Kim, W. Y. Kang, D. Y. Shin and S. H. Hwang, "Development of Virtual Simulator and Database for Deep Learning-based Object Detection," Journal of Drive and Control, Vol.18, No.4, pp.9-18, 2021. https://doi.org/10.7839/KSFC.2021.18.4.009
  7. S. Wang, D. Jia and X. Weng, "Deep reinforcement learning for autonomous driving," arXiv:1811.11329, 2018.
  8. J. Chen, B. Yuan and M. Tomizuka, "Model-free deep reinforcement learning for urban autonomous driving," 2019 IEEE Intelligent Transportation Systems Conference (ITSC), 2019.
  9. C. Desjardins and B. Chaib-draa, "Cooperative adaptive cruise control: A reinforcement learning approach," IEEE Transactions on Intelligent Transportation Systems, Vol.12, No.4, pp.1248-1260, 2011. https://doi.org/10.1109/TITS.2011.2157145
  10. A. Folkers, M. Rick and C. Buskens, "Controlling an autonomous vehicle with deep reinforcement learning," 2019 IEEE Intelligent Vehicles Symposium (IV), 2019.
  11. O. P. Gil, R. Barea, E. L. Guillen, L. M. Bergasa, C. G. Huelamo, R. Gutierrez and A. D. Diaz, "Deep reinforcement learning based control for autonomous vehicles in carla," Multimedia Tools and Applications, Vol.81, No.3, pp.3553-3576, 2022. https://doi.org/10.1007/s11042-021-11437-3
  12. A. Feher, S. Aradi and T. Becsi, "Online Trajectory Planning with Reinforcement Learning for Pedestrian Avoidance," Electronics, Vol.11, No.15, 2022.
  13. M. Yoshimura, G. Fujimoto, A. Kaushik, B. K. Padi, M. Dennison, I. Sood, K. Sarkar and A. Muneer, "Autonomous Emergency Steering Using Deep Reinforcement Learning for Advanced Driver Assistance System," 2020 59th Annual Conference of the Society of Instrument and Control Engineers of Japan (SICE), 2020.
  14. R. S. Sutton, D. McAllester, S. Singh and Y. Mansour, "Policy gradient methods for reinforcement learning with function approximation," Advances in Neural Information Processing Systems, Vol.12, 1999.
  15. J. Schulman, F. Wolski, P. Dhariwal, A. Radford and O. Klimov, "Proximal policy optimization algorithms," arXiv:1707.06347, 2017.
  16. D. Y. Yu, D. G. Kim, H. S. Choi and S. H. Hwang, "Hybrid Control Strategy for Autonomous Driving System using HD Map Information," Journal of Drive and Control, Vol.17, No.4, pp.80-86, 2020. https://doi.org/10.7839/KSFC.2020.17.4.080
  17. A. Kesting, M. Treiber and D. Helbing, "Enhanced intelligent driver model to access the impact of driving strategies on traffic capacity," Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, Vol.368, No.1928, pp.4585-4605, 2010. https://doi.org/10.1098/rsta.2010.0084