Visual-Inertial Odometry Based on Depth Estimation and Kernel Filtering Strategy

  • Jimin Song (Jeonbuk National University) ;
  • HyungGi Jo (Jeonbuk National University) ;
  • Sang Jun Lee (Jeonbuk National University)
  • Received : 2024.05.21
  • Accepted : 2024.07.01
  • Published : 2024.08.31

Abstract

Visual-inertial odometry (VIO) is a method that leverages sensor data from a camera and an inertial measurement unit (IMU) for state estimation. Whereas conventional VIO has limited capability to estimate the scale of translation, recent approaches have improved performance by utilizing depth maps obtained from an RGB-D camera, especially in indoor environments. However, the depth map obtained from an RGB-D camera tends to lose accuracy rapidly as distance increases, and therefore an alternative method is required to improve VIO performance in large-scale environments. In this paper, we argue that leveraging a depth map estimated by a deep neural network benefits state estimation. To improve the reliability of the depth information utilized in the VIO algorithm, we propose a kernel-based sampling strategy that filters out depth values with low confidence. The proposed method aims to improve the robustness and accuracy of VIO algorithms by selectively utilizing reliable values of estimated depth maps. Experiments were conducted on a custom real-world dataset acquired in underground parking lot environments. Experimental results demonstrate that the proposed method effectively improves VIO performance, exhibiting the potential of depth estimation networks for state estimation.
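The abstract describes the kernel-based sampling strategy only at a high level; the exact kernel shape and confidence criterion used in the paper are not given here. The sketch below is a minimal illustration of the general idea, assuming a hypothetical local-variance criterion: a depth value is retained only when the estimated depths inside a small kernel window are consistent, on the premise that high local variance (e.g., at object boundaries) signals an unreliable estimate. The function name, window size, and threshold are illustrative choices, not the paper's.

```python
import numpy as np

def kernel_filter_depth(depth, kernel_size=3, var_threshold=0.1):
    """Illustrative kernel-based confidence filtering of a depth map.

    A depth value is kept only if the variance of the depth values
    inside the kernel window centered on it is below `var_threshold`;
    rejected pixels are set to NaN so a VIO front end can skip them.
    (Hypothetical criterion; the paper's exact rule may differ.)
    """
    h, w = depth.shape
    pad = kernel_size // 2
    # Edge-replicate padding so border pixels have full windows.
    padded = np.pad(depth, pad, mode="edge")
    filtered = np.full_like(depth, np.nan)
    for y in range(h):
        for x in range(w):
            window = padded[y:y + kernel_size, x:x + kernel_size]
            if window.var() < var_threshold:
                filtered[y, x] = depth[y, x]
    return filtered

# Toy depth map: two smooth regions separated by a sharp discontinuity.
depth = np.ones((8, 8), dtype=np.float32)
depth[:, 4:] = 5.0  # depth edge -> high local variance near column 4
out = kernel_filter_depth(depth)
```

In this toy example, pixels deep inside either smooth region pass the variance test, while pixels whose window straddles the discontinuity are rejected, mimicking how low-confidence depths would be excluded before being fused into the VIO pipeline.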


Funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (Ministry of Science and ICT) (No. RS-2024-00346415).
