Vision and Lidar Sensor Fusion for VRU Classification and Tracking in the Urban Environment

Development of a VRU Classification and Tracking Algorithm via Camera-LiDAR Sensor Fusion

  • Yujin Kim (Vehicle Dynamics and Control Laboratory, Seoul National University)
  • Hojoon Lee (Vehicle Dynamics and Control Laboratory, Seoul National University)
  • Kyongsu Yi (Vehicle Dynamics and Control Laboratory, Seoul National University)
  • Received : 2021.10.04
  • Accepted : 2021.11.10
  • Published : 2021.12.31

Abstract

This paper presents a vulnerable road user (VRU) classification and tracking algorithm based on vision-LiDAR sensor fusion for urban autonomous driving. Classification and tracking of vulnerable road users such as pedestrians, bicycles, and motorcycles are essential for autonomous driving in complex urban environments. In this paper, a real-time image object detector, YOLO, and an object tracking algorithm operating on the LiDAR point cloud are fused at a high level. The proposed algorithm consists of four parts. First, the object bounding boxes in pixel coordinates obtained from YOLO are transformed into the local coordinate frame of the subject vehicle using a homography matrix. Second, the LiDAR point cloud is clustered based on Euclidean distance, and the clusters are associated over time using global nearest neighbor (GNN) data association. Third, the states of the clusters, including position, heading angle, velocity, and acceleration, are estimated in real time using a geometric model-free approach (GMFA). Finally, each LiDAR track is matched with a vision track using the bearing angle of the transformed vision track and is assigned a classification ID. The proposed fusion algorithm is evaluated through real-vehicle tests in an urban environment.
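The abstract outlines a four-stage pipeline. Purely as an illustration of stages 1 and 4, the Python sketch below projects a YOLO bounding box onto the vehicle's ground plane with a homography and assigns LiDAR tracks a class by bearing-angle matching. The function names, the matrix `H`, and the 3° angular gate are assumptions made for this sketch, not values from the paper, which only states that angle information of the transformed vision track is used.

```python
import numpy as np

def bbox_to_vehicle_frame(bbox, H):
    """Map the bottom-center of a pixel-space box (u1, v1, u2, v2) to
    (x, y) on the ground plane in the vehicle frame via homography H."""
    u1, v1, u2, v2 = bbox
    p_img = np.array([(u1 + u2) / 2.0, v2, 1.0])  # bottom-center, homogeneous
    p_veh = H @ p_img
    return p_veh[:2] / p_veh[2]                   # dehomogenize

def match_by_bearing(lidar_tracks, vision_tracks, gate_deg=3.0):
    """Give each LiDAR track (x, y) the class of the vision track
    (x, y, class) whose bearing angle is closest, within an angular
    gate. Greedy nearest-angle matching is an assumption; the paper
    does not specify the matcher."""
    matches = {}
    for li, (lx, ly) in enumerate(lidar_tracks):
        l_ang = np.degrees(np.arctan2(ly, lx))
        best, best_err = None, gate_deg
        for vi, (vx, vy, v_cls) in enumerate(vision_tracks):
            err = abs(np.degrees(np.arctan2(vy, vx)) - l_ang)
            if err < best_err:
                best, best_err = (vi, v_cls), err
        if best is not None:
            matches[li] = best  # (vision track index, class id)
    return matches
```

For stage 2, Euclidean-distance clustering followed by GNN association might look like the following sketch; DBSCAN is a stand-in for the paper's distance-based clustering, and the `eps` and gate values are assumed. Stage 3, the GMFA estimator of position, heading, velocity, and acceleration, is described in reference 8 and is omitted here.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.cluster import DBSCAN

def cluster_point_cloud(points_xy, eps=0.5, min_pts=3):
    """Group 2-D LiDAR returns into object clusters; return centroids."""
    labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(points_xy)
    return np.array([points_xy[labels == k].mean(axis=0)
                     for k in sorted(set(labels)) if k != -1])

def gnn_associate(track_pos, cluster_pos, gate=5.0):
    """One-to-one assignment of clusters to existing tracks by solving
    the global Euclidean-cost assignment problem, gated by distance."""
    cost = np.linalg.norm(track_pos[:, None, :] - cluster_pos[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < gate]
```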

Acknowledgement

This research was supported by the Urban Road Autonomous Cooperative Driving Safety and Infrastructure Research Program funded by the Ministry of Land, Infrastructure and Transport (grant No. 19PQOW-B152473-01).

References

  1. Banerjee, Koyel, et al., 2018, "Online camera lidar fusion and object detection on hybrid data for autonomous driving", IEEE Intelligent Vehicles Symposium (IV).
  2. Cho, Hyunggi, et al., 2014, "A multi-sensor fusion system for moving object detection and tracking in urban driving environments", IEEE International Conference on Robotics and Automation (ICRA).
  3. Gao, Hongbo, et al., 2018, "Object classification using CNN-based fusion of vision and LIDAR in autonomous vehicle environment", IEEE Transactions on Industrial Informatics 14.9: pp. 4224~4231. https://doi.org/10.1109/tii.2018.2822828
  4. Chavez-Garcia, Ricardo Omar, and Olivier Aycard, 2015, "Multiple sensor fusion and classification for moving object detection and tracking", IEEE Transactions on Intelligent Transportation Systems 17.2: pp. 525~534. https://doi.org/10.1109/TITS.2015.2479925
  5. Wang, Dominic Zeng, Ingmar Posner, and Paul Newman, 2015, "Model-free detection and tracking of dynamic objects with 2D lidar", The International Journal of Robotics Research 34.7: pp. 1039~1063. https://doi.org/10.1177/0278364914562237
  6. Thuy, Michael, and Fernando Puente León, 2009, "Non-linear, shape independent object tracking based on 2D lidar data", IEEE Intelligent Vehicles Symposium.
  7. Johnsen, Swantje, and Ashley Tews, 2009, "Real-time object tracking and classification using a static camera", Proceedings of the IEEE International Conference on Robotics and Automation, Workshop on People Detection and Tracking.
  8. Lee, Hojoon, et al., 2020, "Moving Object Detection and Tracking Based on Interaction of Static Obstacle Map and Geometric Model-Free Approach for Urban Autonomous Driving", IEEE Transactions on Intelligent Transportation Systems.
  9. Redmon, Joseph, et al., 2016, "You only look once: Unified, real-time object detection", Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
  10. Vincent, Etienne, and Robert Laganière, 2001, "Detecting planar homographies in an image pair", Proceedings of the 2nd International Symposium on Image and Signal Processing and Analysis (ISPA), in conjunction with the 23rd International Conference on Information Technology Interfaces.