DiLO: Direct light detection and ranging odometry based on spherical range images for autonomous driving

  • Han, Seung-Jun (Autonomous Driving Intelligence Research Section, Intelligent Robotics Research Division, Artificial Intelligence Research Laboratory, Electronics and Telecommunications Research Institute) ;
  • Kang, Jungyu (Autonomous Driving Intelligence Research Section, Intelligent Robotics Research Division, Artificial Intelligence Research Laboratory, Electronics and Telecommunications Research Institute) ;
  • Min, Kyoung-Wook (Autonomous Driving Intelligence Research Section, Intelligent Robotics Research Division, Artificial Intelligence Research Laboratory, Electronics and Telecommunications Research Institute) ;
  • Choi, Jungdan (Autonomous Driving Intelligence Research Section, Intelligent Robotics Research Division, Artificial Intelligence Research Laboratory, Electronics and Telecommunications Research Institute)
  • Received : 2021.03.07
  • Accepted : 2021.06.16
  • Published : 2021.08.01

Abstract

Over the last few years, autonomous vehicles have progressed rapidly. Odometry, which estimates displacement from consecutive sensor inputs, is an essential technique for autonomous driving. In this article, we propose a fast, robust, and accurate odometry technique. The proposed technique is light detection and ranging (LiDAR)-based direct odometry, which uses a spherical range image (SRI) that projects a three-dimensional point cloud onto a two-dimensional spherical image plane. Direct odometry was originally developed for vision-based systems, so fast execution can be expected; however, applying it to LiDAR data is difficult because of the sparsity of the point cloud. To solve this problem, we propose an SRI generation method with a mathematical analysis, two keypoint sampling methods based on the SRI that increase precision and robustness, and a fast optimization method. The proposed technique was tested on the KITTI dataset and in real environments. Evaluation on the KITTI training dataset yielded a translation error of 0.69%, a rotation error of 0.0031°/m, and an execution time of 17 ms. These results demonstrate precision comparable with state-of-the-art techniques at a markedly higher speed than conventional ones.
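To make the SRI idea concrete, the sketch below shows one common way to project a LiDAR point cloud onto a spherical image plane: each point is mapped to a pixel by its azimuth and elevation angles, and the pixel stores the point's range. This is a minimal illustration, not the paper's exact SRI generation method; the function name `project_to_sri`, the image size, and the vertical field of view (chosen to resemble the Velodyne HDL-64E used in KITTI) are all assumptions.

```python
import numpy as np

def project_to_sri(points, height=64, width=1024,
                   fov_up_deg=3.0, fov_down_deg=-25.0):
    """Project an (N, 3) point cloud onto a spherical range image.

    Illustrative sketch only: image size and vertical field of view
    are assumed values, not the parameters used in the paper.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)          # range of each point

    yaw = np.arctan2(y, x)                      # azimuth in [-pi, pi]
    pitch = np.arcsin(z / np.maximum(r, 1e-8))  # elevation angle

    fov_up = np.radians(fov_up_deg)
    fov_down = np.radians(fov_down_deg)
    fov = fov_up - fov_down

    # Normalize both angles into pixel coordinates.
    u = 0.5 * (1.0 - yaw / np.pi) * width       # column from azimuth
    v = (fov_up - pitch) / fov * height         # row from elevation

    u = np.clip(np.floor(u), 0, width - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, height - 1).astype(np.int32)

    # Write points in order of decreasing range so that, when several
    # points fall into the same pixel, the nearest return wins.
    order = np.argsort(-r)
    sri = np.zeros((height, width), dtype=np.float32)
    sri[v[order], u[order]] = r[order]
    return sri
```

Keeping the nearest return per pixel is one plausible way to resolve collisions caused by LiDAR sparsity; the resulting dense 2D image is what allows vision-style direct methods to be applied to LiDAR data.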

Acknowledgement

This work was supported by an Institute for Information and Communications Technology Promotion (IITP) grant funded by the Korean government (MSIP) (No. 2018-0-00327, Development of fully autonomous driving navigation AI technology in high-precision map shadow environment).
