
Uncertainty Sequence Modeling Approach for Safe and Effective Autonomous Driving  

Yoon, Jae Ung (Department of Electrical and Computer Engineering, Inha University)
Lee, Ju Hong (Department of Computer Engineering, Inha University)
Publication Information
Smart Media Journal, Vol. 11, No. 9, 2022, pp. 9-20
Abstract
Deep reinforcement learning (RL) is an end-to-end, data-driven control method that is widely used in the autonomous driving domain. However, conventional RL approaches are difficult to apply to autonomous driving tasks because of problems such as inefficiency, instability, and uncertainty, all of which are critical in this domain. Although recent studies have attempted to solve these problems, they are computationally expensive and rely on special assumptions. In this paper, we propose MCDT, a new algorithm that addresses inefficiency, instability, and uncertainty by introducing uncertainty sequence modeling to the autonomous driving domain. Sequence modeling, which views reinforcement learning as the problem of generating decisions that obtain high rewards, avoids the drawbacks of existing studies and provides efficiency and stability, while the integrated uncertainty estimation technique additionally accounts for safety. The proposed method was tested in the OpenAI Gym CarRacing environment, and the experimental results show that the MCDT algorithm delivers efficient, stable, and safe performance compared to existing reinforcement learning methods.
Keywords
Autonomous Driving; Reinforcement Learning; Sequence Modeling; Uncertainty Estimation
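The abstract does not give implementation details, but the combination it describes, return-conditioned sequence modeling with an integrated uncertainty estimate, can be sketched roughly as follows. The snippet below is an illustrative assumption (a Decision Transformer-style policy with Monte Carlo dropout), not the authors' MCDT code; the names ReturnConditionedPolicy and mc_dropout_action, all hyperparameters, and the use of a pre-extracted state feature vector for CarRacing (rather than raw 96x96 frames) are hypothetical.

```python
# Minimal sketch (assumed, not the paper's implementation): RL as return-conditioned
# sequence modeling plus Monte Carlo dropout for predictive uncertainty.
import torch
import torch.nn as nn


class ReturnConditionedPolicy(nn.Module):
    """Predicts the next continuous action from a short history of
    (return-to-go, state, action) tokens; the dropout layers double as the
    noise source for MC-dropout uncertainty at inference time."""

    def __init__(self, state_dim, act_dim, hidden_dim=128, p_drop=0.1):
        super().__init__()
        self.embed_rtg = nn.Linear(1, hidden_dim)
        self.embed_state = nn.Linear(state_dim, hidden_dim)
        self.embed_action = nn.Linear(act_dim, hidden_dim)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=hidden_dim, nhead=4, dropout=p_drop, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Sequential(
            nn.Dropout(p_drop), nn.Linear(hidden_dim, act_dim), nn.Tanh())

    def forward(self, rtg, states, actions):
        # rtg: (B, T, 1), states: (B, T, state_dim), actions: (B, T, act_dim)
        tokens = self.embed_rtg(rtg) + self.embed_state(states) + self.embed_action(actions)
        h = self.encoder(tokens)      # causal masking omitted for brevity
        return self.head(h[:, -1])    # action prediction for the last timestep


@torch.no_grad()
def mc_dropout_action(policy, rtg, states, actions, n_samples=20):
    """Run several stochastic forward passes with dropout kept active and use
    the mean as the action and the per-dimension std as an uncertainty signal."""
    policy.train()                    # keep dropout enabled at inference
    preds = torch.stack([policy(rtg, states, actions) for _ in range(n_samples)])
    policy.eval()
    return preds.mean(dim=0), preds.std(dim=0)


# Usage (shapes only): one trajectory window of length 20 with 32-dim features
# and a 3-dim action (steering, gas, brake) -- all sizes are assumptions.
# policy = ReturnConditionedPolicy(state_dim=32, act_dim=3)
# act, unc = mc_dropout_action(policy, torch.zeros(1, 20, 1),
#                              torch.zeros(1, 20, 32), torch.zeros(1, 20, 3))
```

A safety-aware controller could, for instance, fall back to a conservative action or reduce the commanded return-to-go whenever the estimated uncertainty exceeds a threshold; the exact safety rule used by MCDT is not specified in the abstract.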