http://dx.doi.org/10.4218/etrij.2021-0088

DiLO: Direct light detection and ranging odometry based on spherical range images for autonomous driving  

Han, Seung-Jun (Autonomous Driving Intelligence Research Section, Intelligent Robotics Research Division, Artificial Intelligence Research Laboratory, Electronics and Telecommunications Research Institute)
Kang, Jungyu (Autonomous Driving Intelligence Research Section, Intelligent Robotics Research Division, Artificial Intelligence Research Laboratory, Electronics and Telecommunications Research Institute)
Min, Kyoung-Wook (Autonomous Driving Intelligence Research Section, Intelligent Robotics Research Division, Artificial Intelligence Research Laboratory, Electronics and Telecommunications Research Institute)
Choi, Jungdan (Autonomous Driving Intelligence Research Section, Intelligent Robotics Research Division, Artificial Intelligence Research Laboratory, Electronics and Telecommunications Research Institute)
Publication Information
ETRI Journal / v.43, no.4, 2021, pp. 603-616
Abstract
Autonomous vehicles have progressed rapidly over the last few years. Odometry, which estimates displacement from consecutive sensor inputs, is an essential technique for autonomous driving. In this article, we propose a fast, robust, and accurate odometry technique. The proposed technique is light detection and ranging (LiDAR)-based direct odometry, which uses a spherical range image (SRI) that projects a three-dimensional point cloud onto a two-dimensional spherical image plane. Direct odometry was developed for vision-based methods and promises fast execution; however, applying it to LiDAR data is difficult because of the data's sparsity. To solve this problem, we propose an SRI generation method with mathematical analysis, two keypoint sampling methods based on the SRI that increase precision and robustness, and a fast optimization method. The proposed technique was tested on the KITTI dataset and in real environments. Evaluation results yielded a translation error of 0.69%, a rotation error of 0.0031°/m on the KITTI training dataset, and an execution time of 17 ms. The results demonstrate precision comparable with state-of-the-art methods and markedly higher speed than conventional techniques.
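For readers unfamiliar with spherical range images, the minimal sketch below illustrates the projection the abstract refers to: each LiDAR point (x, y, z) is converted to azimuth and elevation angles and binned into a 2D image whose pixel value is the range. The image size and vertical field of view are assumptions (roughly a Velodyne HDL-64E), not values from the paper, and the paper's own SRI generation and analysis are not reproduced here.

import numpy as np

def spherical_range_image(points, width=1024, height=64,
                          fov_up_deg=3.0, fov_down_deg=-25.0):
    # Project an (N, 3) LiDAR point cloud onto a 2D spherical range image.
    # Each point (x, y, z) maps to a (row, column) = (elevation, azimuth) bin,
    # and the pixel value is the Euclidean range of the nearest return.
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    azimuth = np.arctan2(y, x)                      # horizontal angle, [-pi, pi]
    elevation = np.arcsin(z / np.maximum(r, 1e-8))  # vertical angle

    fov_up = np.deg2rad(fov_up_deg)
    fov_down = np.deg2rad(fov_down_deg)
    fov = fov_up - fov_down

    # Normalize the angles to pixel coordinates.
    u = 0.5 * (1.0 - azimuth / np.pi) * width       # column index
    v = (fov_up - elevation) / fov * height         # row index
    u = np.clip(np.floor(u), 0, width - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, height - 1).astype(np.int32)

    # When several points fall into one pixel, keep the closest one:
    # write points in order of decreasing range so nearer points win.
    order = np.argsort(-r)
    image = np.zeros((height, width), dtype=np.float32)
    image[v[order], u[order]] = r[order]
    return image

Pixels that remain zero correspond to directions with no LiDAR return; handling such gaps caused by the sparsity of the point cloud is the kind of issue the proposed SRI generation method addresses.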
Keywords
Autonomous driving; LiDAR; odometry; self-driving; simultaneous localization and mapping; spherical range image