http://dx.doi.org/10.9723/jksiis.2021.26.4.011

Indoor Autonomous Driving through Parallel Reinforcement Learning of Virtual and Real Environments  

Jeong, Yuseok (Dept. of Computer Information Engineering, Kunsan National University)
Lee, Chang Woo (Dept. of Computer Information Engineering, Kunsan National University)
Publication Information
Journal of Korea Society of Industrial Information Systems / v.26, no.4, 2021, pp. 11-18
Abstract
We propose a reinforcement learning method for indoor autonomous driving that combines training in a virtual environment with training in the real environment. Training in the real environment alone takes about 80 hours, whereas training in both the real and virtual environments takes 40 hours. Learning in the virtual and real environments in parallel has the further advantage that optimized parameters can be found through the many experiments that fast virtual training makes possible. A virtual environment was built from images of an indoor hallway and used for pre-training on a desktop, while learning in the real environment was carried out on a Jetson Xavier connected to various sensors. In addition, to address the accuracy problem caused by the repetitive textures of the indoor corridor, we trained a feature point detector that emphasizes the lower line of the corridor wall, which allowed the wall to be identified as an object and improved accuracy. As learning progresses, the experimental vehicle drives along the center of the indoor corridor, traversing it with an average of 70 steering commands.
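The two-stage schedule described in the abstract (fast pre-training in a virtual environment, then continued learning on the real vehicle) can be illustrated with a toy tabular Q-learning sketch. The 1-D corridor, reward shaping, noise term standing in for real-world disturbances, and all hyperparameters below are illustrative assumptions, not the authors' implementation.

```python
import random

# Toy sketch of the two-stage schedule: fast pre-training in a cheap
# "virtual" corridor, then continued learning in a noisier "real"
# corridor starting from the same Q-table.

ACTIONS = (-1, 0, 1)           # steer left, keep straight, steer right
WIDTH = 5                      # discrete lateral positions in the corridor
CENTER = WIDTH // 2

def step(pos, action, noise=0.0):
    """Apply a steering action; `noise` models real-world disturbances."""
    pos = max(0, min(WIDTH - 1, pos + action))
    if noise and random.random() < noise:
        pos = max(0, min(WIDTH - 1, pos + random.choice((-1, 1))))
    reward = 1.0 if pos == CENTER else -float(abs(pos - CENTER))
    return pos, reward

def train(q, episodes, noise, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular epsilon-greedy Q-learning on the corridor."""
    for _ in range(episodes):
        pos = random.randrange(WIDTH)
        for _ in range(20):
            a = (random.choice(ACTIONS) if random.random() < eps
                 else max(ACTIONS, key=lambda x: q[(pos, x)]))
            nxt, r = step(pos, a, noise)
            best = max(q[(nxt, b)] for b in ACTIONS)
            q[(pos, a)] += alpha * (r + gamma * best - q[(pos, a)])
            pos = nxt

random.seed(0)
q = {(s, a): 0.0 for s in range(WIDTH) for a in ACTIONS}
train(q, episodes=200, noise=0.0)            # "virtual" pre-training
train(q, episodes=50, noise=0.2, alpha=0.1)  # "real" fine-tuning, gentler updates

# The greedy policy should steer back toward the corridor center.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(WIDTH)}
print(policy)
```

The lower learning rate in the fine-tuning stage mirrors the usual practice of making smaller, safer updates once learning moves onto physical hardware.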
Keywords
Deep Learning; Virtual Environment; Real Environment; Parallel Learning; Sensor; Feature Point Detection