http://dx.doi.org/10.7746/jkros.2021.16.2.122

RL-based Path Planning for SLAM Uncertainty Minimization in Urban Mapping  

Cho, Younghun (Dept. of Civil and Environmental Engineering, KAIST)
Kim, Ayoung (Dept. of Civil and Environmental Engineering, KAIST)
Publication Information
The Journal of Korea Robotics Society, vol. 16, no. 2, 2021, pp. 122-129
Abstract
In the Simultaneous Localization and Mapping (SLAM) problem, different paths produce different SLAM results, since SLAM follows the trail of its input data. Active SLAM, which determines where to sense next, can therefore suggest a better path for a better SLAM result during data acquisition. In this paper, we use reinforcement learning to decide where to perceive. By setting coverage of the entire target area as the goal and assigning uncertainty as a negative reward, the reinforcement learning network finds an optimal path that minimizes trajectory uncertainty and maximizes map coverage. However, most active SLAM research has been performed in indoor or aerial environments where robots can move in any direction. In urban environments, vehicles can only move along the road structure and must obey traffic rules. A graph structure can efficiently express the road environment by treating intersections as nodes and streets as edges. In this paper, we propose a novel method to find an optimal SLAM path using a graph structure and reinforcement learning.
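
The sketch below illustrates the idea described in the abstract, not the authors' implementation: tabular Q-learning on a toy road graph whose nodes are intersections and whose edges are streets, with a per-edge uncertainty cost as a negative reward and full edge coverage as the goal. The graph, the uncertainty values, the reward weights, and the hyperparameters are all assumed for illustration.

import random
from collections import defaultdict

# Toy road graph: intersection -> list of (neighbor, uncertainty cost of that street).
ROAD_GRAPH = {
    "A": [("B", 0.2), ("C", 0.5)],
    "B": [("A", 0.2), ("C", 0.1), ("D", 0.4)],
    "C": [("A", 0.5), ("B", 0.1), ("D", 0.3)],
    "D": [("B", 0.4), ("C", 0.3)],
}
ALL_EDGES = {frozenset((u, v)) for u, nbrs in ROAD_GRAPH.items() for v, _ in nbrs}

def train(episodes=2000, alpha=0.1, gamma=0.95, eps=0.2, max_steps=30):
    # State = (current intersection, set of streets already covered).
    Q = defaultdict(float)
    for _ in range(episodes):
        node, visited = "A", frozenset()
        for _ in range(max_steps):
            actions = ROAD_GRAPH[node]
            if random.random() < eps:
                nxt, cost = random.choice(actions)
            else:
                nxt, cost = max(actions, key=lambda a: Q[(node, visited, a[0])])
            edge = frozenset((node, nxt))
            new_visited = visited | {edge}
            # Negative reward for uncertainty, bonus for covering a new street,
            # terminal bonus once every street has been traversed.
            reward = -cost + (1.0 if edge not in visited else 0.0)
            done = new_visited == ALL_EDGES
            if done:
                reward += 10.0
            future = 0.0 if done else max(
                Q[(nxt, new_visited, a)] for a, _ in ROAD_GRAPH[nxt])
            key = (node, visited, nxt)
            Q[key] += alpha * (reward + gamma * future - Q[key])
            node, visited = nxt, new_visited
            if done:
                break
    return Q

if __name__ == "__main__":
    Q = train()
    print("Learned state-action values:", len(Q))

In this toy setting the learned policy trades off street uncertainty against coverage; the paper's method applies the same reward structure to a real road network and SLAM uncertainty estimates.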
Keywords
SLAM; Path Planning; Reinforcement Learning; Mobile Robot;