Acknowledgment
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (Ministry of Science and ICT) (No. RS-2024-00346415).
References
- R. Bloss, "Sensor Innovations Helping Unmanned Vehicles Rapidly Growing Smarter, Smaller, More Autonomous and More Powerful for Navigation, Mapping, and Target Sensing," Sensor Review, Vol. 35, No. 1, pp. 6-9, 2015.
- J. Wang, A. Chortos, "Control Strategies for Soft Robot Systems," Advanced Intelligent Systems, Vol. 4, No. 5, 2022.
- V. S. D. M. Sahu, P. Samal, C. K. Panigrahi, "Modelling, and Control Techniques of Robotic Manipulators: A Review," Materials Today: Proceedings, Vol. 56, No. 5, pp. 2758-2766, 2022.
- P. Stibinger, G. Broughton, F. Majer, Z. Rozsypalek, A. Wang, K. Jindal, A. Zhou, D. Thakur, G. Loianno, T. Krajnik, M. Saska, "Mobile Manipulator for Autonomous Localization, Grasping and Precise Placement of Construction Material in a Semi-structured Environment," IEEE Robotics and Automation Letters, Vol. 6, No. 2, pp. 2595-2602, 2021.
- J. Nidamanuri, C. Nibhanupudi, R. Assfalg, H. Venkataraman, "A Progressive Review: Emerging Technologies for ADAS Driven Solutions," IEEE Transactions on Intelligent Vehicles, Vol. 7, No. 2, pp. 326-341, 2021.
- R. Munoz-Salinas, R. Medina-Carnicer, "UcoSLAM: Simultaneous Localization and Mapping by Fusion of Keypoints and Squared Planar Markers," Pattern Recognition, Vol. 101, pp. 107193, 2020.
- S. Hong, A. Bangunharcana, J. Park, M. Choi, H. Shin, "Visual SLAM-based Robotic Mapping Method for Planetary Construction," Sensors, Vol. 21, No. 22, pp. 7715, 2021.
- W. Deng, K. Huang, X. Chen, Z. Zhou, C. Shi, R. Guo, H. Zhang, "Semantic RGB-D SLAM for Rescue Robot Navigation," IEEE Access, Vol. 8, pp. 221320-221329, 2020.
- C. Bai, T. Xiao, Y. Chen, H. Wang, F. Zhang, X. Gao, "Faster-LIO: Lightweight Tightly Coupled LiDAR-inertial Odometry Using Parallel Sparse Incremental Voxels," IEEE Robotics and Automation Letters, Vol. 7, No. 2, pp. 4861-4868, 2022.
- S. Lynen, M. W. Achtelik, S. Weiss, M. Chli, R. Siegwart, "A Robust and Modular Multi-sensor Fusion Approach Applied to Mav Navigation," IEEE/RSJ International Conference on Intelligent Robots and Systems, 2013.
- S. Weiss, M. W. Achtelik, S. Lynen, M. Chli, R. Siegwart, "Real-time Onboard Visual-inertial State Estimation and Self-calibration of MAVs in Unknown Environments," IEEE International Conference on Robotics and Automation, 2012.
- M. Bloesch, S. Omari, M. Hutter, R. Siegwart, "Robust Visual Inertial Odometry Using a Direct EKF-based Approach," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2015.
- Z. Yang, S. Shen, "Monocular Visual-inertial State Estimation with Online Initialization and Camera-IMU Extrinsic Calibration," IEEE Transactions on Automation Science and Engineering, Vol. 14, No. 1, pp. 39-51, 2016.
- Z. Shan, R. Li, S. Schwertfeger, "RGBD-inertial Trajectory Estimation and Mapping for Ground Robots," Sensors, Vol. 19, No. 10, pp. 2251, 2019.
- D. Eigen, C. Puhrsch, R. Fergus, "Depth Map Prediction from a Single Image Using a Multi-scale Deep Network," Advances in Neural Information Processing Systems, 2014.
- J. H. Lee, M. Han, D. W. Ko, I. H. Suh, "From Big to Small: Multi-scale Local Planar Guidance for Monocular Depth Estimation," arXiv preprint, 2019.
- G. Huang, Z. Liu, L. Van Der Maaten, K. Q. Weinberger, "Densely Connected Convolutional Networks," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
- Q. Wang, D. Dai, L. Hoyer, L. V. Gool, O. Fink, "Domain Adaptive Semantic Segmentation with Self-supervised Depth Estimation," Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021.
- Q. Dai, V. Patil, S. Hecker, D. Dai, L. V. Gool, K. Schindler, "Self-supervised Object Motion and Depth Estimation from Video," Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020.
- Y. Wang, W. Chao, D. Garg, B. Hariharan, M. Campbell, K. Q. Weinberger, "Pseudo-LiDAR from Visual Depth Estimation: Bridging the Gap in 3D Object Detection for Autonomous Driving," Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019.
- D. Wofk, R. Ranftl, M. Muller, V. Koltun, "Monocular Visual-inertial Depth Estimation," IEEE International Conference on Robotics and Automation (ICRA), 2023.
- N. Merrill, P. Geneva, G. Huang, "Robust Monocular Visual-inertial Depth Completion for Embedded Systems," IEEE International Conference on Robotics and Automation (ICRA), 2021.
- Y. Almalioglu, M. Turan, M. R. U. Saputra, P. P. D. Gusmao, A. Markham, N. Trigoni, "SelfVIO: Self-supervised Deep Monocular Visual-Inertial Odometry and Depth Estimation," Neural Networks, Vol. 150, pp. 119-136, 2022.
- X. Zuo, N. Merrill, W. Li, Y. Liu, M. Pollefeys, G. Huang, "CodeVIO: Visual-inertial Odometry with Learned Optimizable Dense Depth," IEEE International Conference on Robotics and Automation (ICRA), 2021.
- K. Sartipi, T. Do, T. Ke, K. Vuong, S. I. Roumeliotis, "Deep Depth Estimation from Visual-inertial Slam," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020.
- L. Chen, G. Papandreou, I. Kokkinos, K. Murphy, A. L. Yuille, "DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 40, No. 4, pp. 834-848, 2017.
- T. Qin, P. Li, S. Shen, "VINS-Mono: A Robust and Versatile Monocular Visual-inertial State Estimator," IEEE Transactions on Robotics, Vol. 34, No. 4, pp. 1004-1020, 2018.
- J. Civera, A. J. Davison, J. M. M. Montiel, "Inverse Depth Parametrization for Monocular SLAM," IEEE Transactions on Robotics, Vol. 24, No. 5, pp. 932-945, 2008.
- S. Agarwal, K. Mierle, "Ceres Solver: Tutorial & Reference," Google Inc., 2012.
- J. Sturm, N. Engelhard, F. Endres, W. Burgard, D. Cremers, "A Benchmark for the Evaluation of RGB-D SLAM Systems," IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012.