http://dx.doi.org/10.7839/ksfc.2022.19.4.110

Reinforcement Learning based Autonomous Emergency Steering Control in Virtual Environments  

Lee, Hunki (Department of Mechanical Engineering, Sungkyunkwan University)
Kim, Taeyun (Department of Mechanical Engineering, Sungkyunkwan University)
Kim, Hyobin (Department of Mechanical Engineering, Sungkyunkwan University)
Hwang, Sung-Ho (Department of Mechanical Engineering, Sungkyunkwan University)
Publication Information
Journal of Drive and Control, Vol.19, No.4, 2022, pp.110-116
Abstract
Recently, various studies have been conducted to apply deep learning and AI to fields of autonomous driving such as recognition, sensor processing, decision-making, and control. This paper proposes a reinforcement learning based controller for autonomous vehicles that is applicable to path following, static obstacle avoidance, and pedestrian avoidance. To allow repetitive driving simulation, a reinforcement learning environment was constructed in a virtual environment. After training on path following scenarios, the control performance was compared with that of the Pure-Pursuit and Stanley controllers, which are widely used because of their good performance and simplicity. Autonomous emergency steering and autonomous emergency braking scenarios were then created based on the test cases of the KNCAP test and assessment protocol and used for training. Experimental results showed zero collisions, demonstrating that the reinforcement learning controller succeeded in both the stationary obstacle avoidance scenario and the pedestrian collision scenario under the given conditions.
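For context, the Pure-Pursuit and Stanley baselines mentioned above follow well-known geometric steering laws. The sketch below shows their generic textbook forms in Python; the function names, gains, and parameters are illustrative assumptions and do not reproduce the exact implementation or tuning used in the paper.

import math

def pure_pursuit_steer(alpha, lookahead_dist, wheelbase):
    # Pure-Pursuit law: steer toward a lookahead point on the reference path.
    # alpha: angle (rad) from the vehicle heading to the lookahead point
    # lookahead_dist: distance (m) to the lookahead point
    # wheelbase: vehicle wheelbase (m)
    return math.atan2(2.0 * wheelbase * math.sin(alpha), lookahead_dist)

def stanley_steer(heading_error, cross_track_error, speed, k=1.0, k_soft=1e-3):
    # Stanley law: heading-error correction plus a speed-scaled cross-track term
    # measured at the front axle; k_soft keeps the term bounded at low speed.
    # heading_error: path heading minus vehicle heading (rad)
    # cross_track_error: signed lateral offset from the path (m)
    # speed: longitudinal speed (m/s)
    return heading_error + math.atan2(k * cross_track_error, k_soft + speed)

Both functions return a front-wheel steering angle in radians; in the paper these controllers serve only as baselines against which the path-following performance of the learned policy is compared.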
Keywords
Autonomous Driving; Reinforcement Learning; Virtual Environment; Autonomous Emergency Steering; Autonomous Emergency Braking;