http://dx.doi.org/10.12941/jksiam.2021.25.149

Markov Decision Process-based Potential Field Technique for UAV Planning  

MOON, CHAEHWAN (DEPARTMENT OF AEROSPACE ENGINEERING, KOREA ADVANCED INSTITUTE OF SCIENCE AND TECHNOLOGY)
AHN, JAEMYUNG (DEPARTMENT OF AEROSPACE ENGINEERING, KOREA ADVANCED INSTITUTE OF SCIENCE AND TECHNOLOGY)
Publication Information
Journal of the Korean Society for Industrial and Applied Mathematics, v.25, no.4, 2021, pp. 149-161
Abstract
This study proposes a methodology for mission/path planning of an unmanned aerial vehicle (UAV) using an artificial potential field constructed from a Markov decision process (MDP). The planning problem is formulated as an MDP, a low-resolution solution of the MDP is obtained, and this solution is used to define an artificial potential field that yields a continuous UAV mission plan. A numerical case study demonstrates the validity of the proposed technique.
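
For illustration only, the following minimal Python sketch shows the general idea summarized in the abstract: a coarse grid-world MDP is solved by value iteration, and the resulting value function is interpolated into an artificial potential field whose negative gradient produces a continuous path. The grid size, reward placement, discount factor, and bilinear interpolation are assumptions made for this sketch and do not reproduce the authors' formulation.

# Illustrative sketch (not the authors' implementation): solve a coarse grid MDP by
# value iteration, then treat the interpolated value function as a potential field.
# Grid size, rewards, gamma, and bilinear interpolation are assumptions for this example.
import numpy as np

N = 10                                  # coarse (low-resolution) grid dimension
gamma = 0.95                            # discount factor (assumed)
goal = (9, 9)                           # assumed goal cell
obstacles = {(4, 4), (4, 5), (5, 4)}    # assumed threat/obstacle cells

# Reward model: large reward at the goal, penalty at obstacles, small step cost.
R = -0.04 * np.ones((N, N))
R[goal] = 1.0
for c in obstacles:
    R[c] = -1.0

actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]    # up, down, left, right

def value_iteration(R, gamma, tol=1e-6):
    """Deterministic-transition value iteration on the coarse grid."""
    V = np.zeros_like(R)
    while True:
        V_new = np.copy(V)
        for i in range(N):
            for j in range(N):
                q = []
                for di, dj in actions:
                    ni = min(max(i + di, 0), N - 1)
                    nj = min(max(j + dj, 0), N - 1)
                    q.append(R[i, j] + gamma * V[ni, nj])
                V_new[i, j] = max(q)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new

V = value_iteration(R, gamma)

def potential(x, y, V):
    """Artificial potential: bilinear interpolation of -V at a continuous point."""
    i0 = int(np.clip(x, 0, N - 2))
    j0 = int(np.clip(y, 0, N - 2))
    fx, fy = x - i0, y - j0
    v = ((1 - fx) * (1 - fy) * V[i0, j0] + fx * (1 - fy) * V[i0 + 1, j0]
         + (1 - fx) * fy * V[i0, j0 + 1] + fx * fy * V[i0 + 1, j0 + 1])
    return -v    # high value -> low potential, so descent moves toward the goal

def plan_path(start, V, step=0.2, iters=400):
    """Follow the negative numerical gradient of the potential for a continuous path."""
    path = [np.array(start, dtype=float)]
    eps = 1e-3
    for _ in range(iters):
        x, y = path[-1]
        gx = (potential(x + eps, y, V) - potential(x - eps, y, V)) / (2 * eps)
        gy = (potential(x, y + eps, V) - potential(x, y - eps, V)) / (2 * eps)
        nxt = np.clip(path[-1] - step * np.array([gx, gy]), 0, N - 1)
        path.append(nxt)
    return np.array(path)

path = plan_path(start=(0.0, 0.0), V=V)
print("final position:", path[-1])    # should approach the goal cell (9, 9)

In this sketch the low-resolution MDP solution plays the role of the global planner, while the interpolated potential field supplies the continuous refinement; this two-level structure mirrors the methodology described in the abstract, though the actual reward design and field construction in the paper may differ.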
Keywords
Markov decision process (MDP); Sequential decision-making process; Potential field algorithm; Artificial potential field (APF); Mission planning