Acknowledgement
This paper is based on the master's thesis of the first author (C. Moon) [17], which was originally written in Korean.
References
- Waharte, S., & Trigoni, N., Supporting search and rescue operations with UAVs. In 2010 International Conference on Emerging Security Technologies, IEEE, 2010, pp. 142-147.
- Choi, U., Jeong, S., & Ahn, J., Autonomous Single UAV Reconnaissance Mission Planning in Multi-Base and Multi-Threat Environment Based on Markov Decision Process. 2016 KSAS Fall Conference, Jeju, Korea, 2016.
- Schesvold, D., Tang, J., Ahmed, B. M., Altenburg, K., & Nygard, K. E., POMDP planning for high-level UAV decisions: Search vs. strike. In Proceedings of the 16th International Conference on Computer Applications in Industry and Engineering, 2003.
- Ure, N. K., Chowdhary, G., Chen, Y. F., How, J. P., & Vian, J., Distributed learning for planning under uncertainty problems with heterogeneous teams. Journal of Intelligent & Robotic Systems, 74(1-2) (2014), 529-544. https://doi.org/10.1007/s10846-013-9980-x
- Lei, G., Dong, M. Z., Xu, T., & Wang, L., Multi-agent path planning for unmanned aerial vehicle based on threats analysis. In 2011 3rd International Workshop on Intelligent Systems and Applications, IEEE, 2011, pp. 1-4.
- Challita, U., Saad, W., & Bettstetter, C., Deep reinforcement learning for interference-aware path planning of cellular-connected UAVs. In 2018 IEEE International Conference on Communications (ICC), IEEE, 2018, pp. 1-7.
- Bethke, B., Redding, J., & How, J. P., Agent Capability in Persistent Mission Planning using Approximate Dynamic Programming. In 2010 American Control Conference, IEEE, 2010.
- Jeong, B., Kim, G., Ha, J., & Choi, H., MDP-based Mission Planning for Multi-Agent Information Gathering. 2013 KSAS Fall Conference, Jeju, Korea, 2013.
- Jeong, B. M., Ha, J. S., & Choi, H. L., MDP-based mission planning for multi-UAV persistent surveillance. In 2014 14th International Conference on Control, Automation and Systems (ICCAS 2014), IEEE, 2014, pp. 831-834.
- Bhowal, A., Potential Field Methods for Safe Reinforcement Learning: Exploring Q-Learning and Potential Fields. Master's thesis, TU Delft, Delft, The Netherlands, 2017.
- Zeng, J., Ju, R., Qin, L., Hu, Y., Yin, Q., & Hu, C., Navigation in Unknown Dynamic Environments Based on Deep Reinforcement Learning. Sensors, 19(18) (2019), 3837. https://doi.org/10.3390/s19183837
- Bellman, R., A Markovian decision process. Journal of Mathematics and Mechanics, (1957), 679-684.
- Shapley, L. S., Stochastic games. Proceedings of the National Academy of Sciences, 39(10) (1953), 1095-1100. https://doi.org/10.1073/pnas.39.10.1953
- Papadimitriou, C. H., & Tsitsiklis, J. N., The complexity of Markov decision processes. Mathematics of Operations Research, 12(3) (1987), 441-450. https://doi.org/10.1287/moor.12.3.441
- Littman, M. L., Dean, T. L., & Kaelbling, L. P., On the complexity of solving Markov decision problems. In Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence, 1995, pp. 394-402.
- Jakes, W. C., & Cox, D. C., Microwave mobile communications. Wiley-IEEE Press, 1994.
- Moon, C., UAV Mission Planning Using MDP-based Artificial Potential Field, Master's Thesis, Korea Advanced Institute of Science and Technology (KAIST), 2021 (written in Korean).