Omni-directional Visual-LiDAR SLAM for Multi-Camera System

  • Received : 2022.03.10
  • Accepted : 2022.04.19
  • Published : 2022.08.31

Abstract

Due to the limited field of view of a pinhole camera, camera pose estimation applications such as visual SLAM lack stability and accuracy. Multi-camera setups and large field-of-view cameras are now used to address this issue; however, a multi-camera system increases the computational complexity of the algorithm. Therefore, for multi-camera-assisted visual simultaneous localization and mapping (vSLAM), a multi-view tracking algorithm is proposed that balances the feature budget between tracking and local mapping. The proposed algorithm is based on the PanoSLAM architecture with a panoramic camera model. To avoid the scale ambiguity issue, a 3D LiDAR is fused with the omnidirectional camera setup: depth is estimated directly from the 3D LiDAR, and the remaining features are triangulated from pose information. To validate the method, we collected a dataset in an outdoor environment and performed extensive experiments. Accuracy was measured by the absolute trajectory error, which shows comparable robustness across various environments.
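To make the depth-fusion step described in the abstract concrete, the sketch below illustrates one plausible way to assign LiDAR depth directly to image features: project the LiDAR scan into the panoramic image through the camera-LiDAR extrinsics, attach the range of the nearest projected return to each feature, and leave features with no nearby return to be triangulated from pose information. This is a minimal illustration under stated assumptions, not the authors' implementation; `project_to_image`, the extrinsic matrix name, and the pixel threshold are all hypothetical.

```python
import numpy as np

def fuse_lidar_depth(features_uv, lidar_xyz, T_cam_lidar, project_to_image,
                     max_pix_dist=2.0):
    """Assign LiDAR range to image features (illustrative sketch).

    features_uv      : (N, 2) detected feature pixel coordinates
    lidar_xyz        : (M, 3) LiDAR points in the LiDAR frame
    T_cam_lidar      : (4, 4) extrinsic transform, LiDAR frame -> camera frame
    project_to_image : camera-model projection, (M, 3) points -> (M, 2) pixels
                       (a panoramic model can project points in all directions)
    Returns a per-feature depth array with np.nan where no LiDAR return lands
    close enough; those features would be triangulated from pose instead.
    """
    # Move the scan into the camera frame via the extrinsic calibration.
    pts_h = np.hstack([lidar_xyz, np.ones((len(lidar_xyz), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    uv_lidar = project_to_image(pts_cam)          # projected pixel positions
    ranges = np.linalg.norm(pts_cam, axis=1)      # range along each ray

    depths = np.full(len(features_uv), np.nan)
    for i, uv in enumerate(features_uv):
        d2 = np.sum((uv_lidar - uv) ** 2, axis=1)
        j = np.argmin(d2)
        if d2[j] <= max_pix_dist ** 2:            # nearest return close enough?
            depths[i] = ranges[j]
    return depths
```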
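Accuracy reporting via the absolute trajectory error (ATE) follows a standard recipe: rigidly align the estimated trajectory to ground truth, then take the root-mean-square of the translational residuals. The snippet below is a generic sketch of that metric using a no-scale Kabsch/Umeyama alignment, not the paper's evaluation code; it assumes the two trajectories are already time-associated.

```python
import numpy as np

def absolute_trajectory_error(est_xyz, gt_xyz):
    """RMSE of translation error after rigid alignment (Kabsch, no scale).

    est_xyz, gt_xyz : (N, 3) time-associated trajectory positions.
    """
    mu_e, mu_g = est_xyz.mean(axis=0), gt_xyz.mean(axis=0)
    E, G = est_xyz - mu_e, gt_xyz - mu_g         # centered trajectories

    # Best-fit rotation taking the estimate into the ground-truth frame.
    U, _, Vt = np.linalg.svd(G.T @ E)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # avoid reflection
    R = U @ S @ Vt
    t = mu_g - R @ mu_e

    aligned = est_xyz @ R.T + t                  # aligned estimated trajectory
    residuals = np.linalg.norm(aligned - gt_xyz, axis=1)
    return float(np.sqrt(np.mean(residuals ** 2)))
```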

Keywords

Acknowledgments

This research was supported in part by the MSIT (Ministry of Science and ICT), Korea, under the Grand Information Technology Research Center support program (IITP-2022-2020-0-01462) supervised by the IITP (Institute for Information & communications Technology Planning & Evaluation), and in part by the Technology Innovation Program (or Industrial Strategic Technology Development Program-ATC+) (20009546, Development of service robot core technology that can provide advanced service in real life) funded by the Ministry of Trade, Industry & Energy (MOTIE, Korea).

References

  1. G. Bresson, Z. Alsayed, L. Yu, and S. Glaser, "Simultaneous localization and mapping: A survey of current trends in autonomous driving," IEEE Transactions on Intelligent Vehicles, vol. 2, no. 3, pp. 194-220, Sept., 2017, DOI: 10.1109/TIV.2017.2749181.
  2. C. Cadena, L. Carlone, H. Carrillo, Y. Latif, D. Scaramuzza, J. Neira, I. Reid, and J. J. Leonard, "Past, present, and future of simultaneous localization and mapping: Toward the robust perception age," IEEE Transactions on Robotics, vol. 32, no. 6, pp. 1309-1332, Dec., 2016, DOI: 10.1109/TRO.2016.2624754.
  3. R. Mur-Artal and J. D. Tardos, "ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras," IEEE Transactions on Robotics, vol. 33, no. 5, pp. 1255-1262, Oct., 2017, DOI: 10.1109/TRO.2017.2705103.
  4. J. Engel, T. Schops, and D. Cremers, "LSD-SLAM: Large-scale direct monocular SLAM," European Conference on Computer Vision, pp. 834-849, 2014, DOI: 10.1007/978-3-319-10605-2_54.
  5. G. Klein and D. Murray, "Parallel tracking and mapping for small AR workspaces," 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, Nara, Japan, 2007, DOI: 10.1109/ISMAR.2007.4538852.
  6. J. Engel, V. Koltun, and D. Cremers, "Direct sparse odometry," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 3, pp. 611-625, Mar., 2018, DOI: 10.1109/TPAMI.2017.2658577.
  7. C. Forster, M. Pizzoli, and D. Scaramuzza, "SVO: Fast semi-direct monocular visual odometry," 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 2014, DOI: 10.1109/ICRA.2014.6906584.
  8. H. Chen, K. Wang, W. Hu, K. Yang, R. Cheng, X. Huang, and J. Bai, "PALVO: visual odometry based on panoramic annular lens," Optics Express, vol. 27, no. 17, pp. 24481-24497, 2019, DOI: 10.1364/OE.27.024481.
  9. D. Scaramuzza and R. Siegwart, "Monocular omnidirectional visual odometry for outdoor ground vehicles," International Conference on Computer Vision Systems, pp. 206-215, 2008, DOI: 10.1007/978-3-540-79547-6_20.
  10. P. Liu, L. Heng, T. Sattler, A. Geiger, and M. Pollefeys, "Direct visual odometry for a fisheye-stereo camera," 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 2017, DOI: 10.1109/IROS.2017.8205988.
  11. S. Ji, Z. Qin, J. Shan, and M. Liu, "Panoramic SLAM from a multiple fisheye camera rig," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 159, pp. 169-183, Jan., 2020, DOI: 10.1016/j.isprsjprs.2019.11.014.
  12. Y. Yang, D. Tang, D. Wang, W. Song, J. Wang, and M. Fu, "Multi-camera visual SLAM for off-road navigation," Robotics and Autonomous Systems, vol. 128, Jun., 2020, DOI: 10.1016/j.robot.2020.103505.
  13. C. Won, H. Seok, Z. Cui, M. Pollefeys, and J. Lim, "OmniSLAM: Omnidirectional Localization and Dense Mapping for Wide-baseline Multi-camera Systems," 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 2020, DOI: 10.1109/ICRA40945.2020.9196695.
  14. Y. Wang, S. Cai, S.-J. Li, Y. Guo, T. Li, and M.-M. Cheng, "CubemapSLAM: A Piecewise-Pinhole Monocular Fisheye SLAM System," Asian Conference on Computer Vision, pp. 34-49, 2018, DOI: 10.1007/978-3-030-20876-9_3.
  15. S. Urban and S. Hinz, "MultiCol-SLAM - A Modular Real-Time Multi-Camera SLAM System," arXiv:1610.07336, 2016, DOI: 10.48550/arXiv.1610.07336.
  16. G. Pandey, J. R. McBride, S. Savarese, and R. M. Eustice, "Automatic Extrinsic Calibration of a 3D Lidar and Camera by Maximizing Mutual Information," Twenty-Sixth AAAI Conference on Artificial Intelligence, 2012, [Online], https://www.aaai.org/ocs/index.php/AAAI/AAAI12/paper/view/5029/5371.
  17. G. Pandey, J. R. McBride, and R. M. Eustice, "Ford Campus vision and lidar data set," The International Journal of Robotics Research, vol. 30, no. 13, 2011, DOI: 10.1177/0278364911400640.
  18. Z. Javed and G. W. Kim, "PanoVILD: a challenging panoramic vision, inertial and LiDAR dataset for simultaneous localization and mapping," The Journal of Supercomputing, vol. 78, 2022, DOI: 10.1007/s11227-021-04198-1.
  19. R. Kummerle, G. Grisetti, H. Strasdat, K. Konolige, and W. Burgard, "g2o: A general framework for graph optimization," 2011 IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, 2011, DOI: 10.1109/ICRA.2011.5979949.
  20. A. Angeli, D. Filliat, S. Doncieux, and J.-A. Meyer, "Fast and Incremental Method for Loop-Closure Detection Using Bags of Visual Words," IEEE Transactions on Robotics, vol. 24, no. 5, Oct., 2008, DOI: 10.1109/TRO.2008.2004514.