http://dx.doi.org/10.22680/kasa2021.13.4.007

Vision and Lidar Sensor Fusion for VRU Classification and Tracking in the Urban Environment  

Kim, Yujin (Vehicle Dynamics and Control Laboratory, Seoul National University)
Lee, Hojun (Vehicle Dynamics and Control Laboratory, Seoul National University)
Yi, Kyongsu (Vehicle Dynamics and Control Laboratory, Seoul National University)
Publication Information
Journal of Auto-vehicle Safety Association / v.13, no.4, 2021, pp. 7-13
Abstract
This paper presents a vulnerable road user (VRU) classification and tracking algorithm that fuses vision and LiDAR sensors for urban autonomous driving. Classification and tracking of vulnerable road users such as pedestrians, bicycles, and motorcycles are essential for autonomous driving in complex urban environments. In this paper, a real-time image-based object detection algorithm, YOLO, and an object tracking algorithm based on the LiDAR point cloud are fused at a high level. The proposed algorithm consists of four parts. First, the object bounding boxes in pixel coordinates obtained from YOLO are transformed into the local coordinate frame of the subject vehicle using a homography matrix. Second, the LiDAR point cloud is clustered based on Euclidean distance, and the clusters are associated using global nearest neighbor (GNN). Third, the states of the clusters, including position, heading angle, velocity, and acceleration, are estimated in real time using a geometric model-free approach (GMFA). Finally, each LiDAR track is matched with a vision track using the angle information of the transformed vision track and assigned a classification ID. The proposed fusion algorithm is evaluated via real vehicle tests in an urban environment.
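To make the fusion steps concrete, the sketch below illustrates the two coordinate-level operations the abstract describes: projecting a YOLO bounding box onto the subject vehicle's local ground plane with a homography, and assigning each LiDAR track the class of the nearest vision track in bearing angle. This is a minimal sketch under stated assumptions, not the authors' implementation; the homography values, track field names, and the 3° angular gate are all illustrative.

```python
import numpy as np

# Hypothetical pixel-to-ground homography (3x3), assumed to come from an
# offline camera-to-ground-plane calibration. Values are placeholders.
H = np.array([[0.01, 0.0, -3.2],
              [0.0, 0.01, -2.4],
              [0.0, 0.0, 1.0]])

def pixel_to_local(u, v, H):
    """Project pixel (u, v) to (x, y) on the ground plane of the
    subject vehicle's local frame via the homography H."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]

def bbox_bearing(box, H):
    """Bearing (rad) of a vision bounding box in the vehicle frame,
    taking the bottom-center of the box as its ground-contact point."""
    u = 0.5 * (box["left"] + box["right"])
    x, y = pixel_to_local(u, box["bottom"], H)
    return np.arctan2(y, x)

def classify_lidar_tracks(lidar_tracks, vision_tracks, H,
                          gate=np.deg2rad(3.0)):
    """Assign each LiDAR track the class ID of the nearest vision track
    in bearing, if the angular residual falls inside the gate."""
    vision_bearings = [(bbox_bearing(v, H), v["class_id"])
                       for v in vision_tracks]
    matches = {}
    for t in lidar_tracks:
        bearing = np.arctan2(t["y"], t["x"])
        best_id, best_err = None, gate
        for vb, cid in vision_bearings:
            # Wrap the angular difference into [-pi, pi] before gating.
            err = abs(np.arctan2(np.sin(vb - bearing), np.cos(vb - bearing)))
            if err < best_err:
                best_id, best_err = cid, err
        if best_id is not None:
            matches[t["id"]] = best_id
    return matches

# Example: one pedestrian box and one LiDAR track roughly ahead-left.
vision = [{"left": 600, "right": 660, "bottom": 700, "class_id": "pedestrian"}]
lidar = [{"id": 7, "x": 10.0, "y": 2.0}]
print(classify_lidar_tracks(lidar, vision, H))
```

Matching in bearing alone, rather than in full position, reflects the abstract's high-level fusion: a monocular homography gives reliable direction but noisy range, so the angle is the more trustworthy association cue.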
Keywords
Autonomous driving; Sensor fusion; You Only Look Once (YOLO); Vulnerable road user; Geometric model-free tracking
Citations & Related Records
연도 인용수 순위
References
1 Banerjee, Koyel, et al., 2018, "Online camera lidar fusion and object detection on hybrid data for autonomous driving", IEEE Intelligent Vehicles Symposium (IV).
2 Cho, Hyunggi, et al., 2014, "A multi-sensor fusion system for moving object detection and tracking in urban driving environments", IEEE International Conference on Robotics and Automation (ICRA).
3 Chavez-Garcia, Ricardo Omar, and Olivier Aycard, 2015, "Multiple sensor fusion and classification for moving object detection and tracking", IEEE Transactions on Intelligent Transportation Systems 17.2, pp. 525~534.
4 Wang, Dominic Zeng, Ingmar Posner, and Paul Newman, 2015, "Model-free detection and tracking of dynamic objects with 2D lidar", The International Journal of Robotics Research 34.7, pp. 1039~1063.
5 Gao, Hongbo, et al., 2018, "Object classification using CNN-based fusion of vision and LIDAR in autonomous vehicle environment", IEEE Transactions on Industrial Informatics 14.9, pp. 4224~4231.
6 Thuy, Michael, and Fernando Puente Leon, 2009, "Non-linear, shape independent object tracking based on 2D lidar data", IEEE Intelligent Vehicles Symposium.
7 Johnsen, Swantje, and Ashley Tews, 2009, "Real-time object tracking and classification using a static camera", Proceedings of the IEEE International Conference on Robotics and Automation, Workshop on People Detection and Tracking.
8 Vincent, Etienne, and Robert Laganiere, 2001, "Detecting planar homographies in an image pair", Proceedings of the 2nd International Symposium on Image and Signal Processing and Analysis (ISPA), in conjunction with the 23rd International Conference on Information Technology Interfaces.
9 Lee, Hojoon, et al., 2020, "Moving Object Detection and Tracking Based on Interaction of Static Obstacle Map and Geometric Model-Free Approach for Urban Autonomous Driving", IEEE Transactions on Intelligent Transportation Systems.
10 Redmon, Joseph, et al., 2016, "You only look once: Unified, real-time object detection", Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.