http://dx.doi.org/10.12815/kits.2020.19.5.162

Development of Autonomous Vehicle Learning Data Generation System  

Yoon, Seungje (Mobility Research and Artificial Intelligence)
Jung, Jiwon (Mobility Research and Artificial Intelligence)
Hong, June (Mobility Research and Artificial Intelligence)
Lim, Kyungil (Advanced Institutes of Convergence Technology)
Kim, Jaehwan (Advanced Institutes of Convergence Technology)
Kim, Hyungjoo (Advanced Institutes of Convergence Technology)
Publication Information
The Journal of The Korea Institute of Intelligent Transport Systems, vol. 19, no. 5, 2020, pp. 162-177
Abstract
The perception of the traffic environment based on various sensors in an autonomous driving system is directly related to driving safety. Recently, as perception models based on deep neural networks have come into use with the development of machine learning and deep learning technology, both model training and a high-quality training dataset are required. However, there are several practical difficulties in collecting data on every situation that may occur in self-driving: the performance of the perception model may deteriorate due to differences between overseas and domestic traffic environments, and data on bad weather, in which the sensors cannot operate normally, cannot be guaranteed in qualitative terms. It is therefore necessary to build a virtual road environment in a simulator, rather than on an actual road, to collect the training data. In this paper, a training dataset collection process is proposed in which the weather, illumination, sensor position, and the type and number of vehicles are diversified in a simulator environment that reproduces domestic road conditions. To achieve better performance, the authors translated the image domain to be closer to reality and diversified the data. Performance evaluation was conducted on test data collected in an actual road environment, and the performance was similar to that of a model trained only on real-environment data.
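The collection process summarized above, sweeping weather, illumination, sensor placement, and vehicle count in the simulator, amounts to enumerating a parameter grid of capture configurations. The sketch below illustrates that idea only; the condition names, value ranges, and the `enumerate_scenarios` helper are illustrative assumptions, not the authors' actual tooling or settings:

```python
from itertools import product

# Hypothetical ranges for the varied simulation conditions
# (illustrative values; the paper's actual settings are not specified here).
weathers = ["clear", "rain", "fog", "snow"]
illuminations = ["day", "dusk", "night"]
sensor_heights_m = [1.2, 1.6, 2.0]   # candidate camera mounting heights
vehicle_counts = [5, 20, 50]         # number of surrounding vehicles spawned

def enumerate_scenarios():
    """Yield one capture configuration per combination of conditions."""
    for weather, light, height, n_vehicles in product(
            weathers, illuminations, sensor_heights_m, vehicle_counts):
        yield {
            "weather": weather,
            "illumination": light,
            "sensor_height_m": height,
            "vehicle_count": n_vehicles,
        }

scenarios = list(enumerate_scenarios())
print(len(scenarios))  # 4 * 3 * 3 * 3 = 108 distinct capture configurations
```

Each configuration would then drive one simulator session whose rendered frames and labels are saved to the training set; the GAN-based domain translation mentioned in the abstract would be applied afterwards to the rendered images.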
Keywords
Autonomous simulation; Real road dataset; Virtual environment dataset; AI learning data;