Study of Deep Reinforcement Learning-Based Agents for Controlled Flight into Terrain (CFIT) Autonomous Avoidance

  • Received : 2021.06.01
  • Accepted : 2022.03.31
  • Published : 2022.06.30

Abstract

Efforts to prevent CFIT accidents have so far emphasized education measures aimed at minimizing human error, together with enforcement measures. The current engineering measure, however, is limited to a system (TAWS) that only issues warnings before the aircraft collides with terrain or obstacles; because it performs no actual automatic avoidance maneuver, it cannot prevent accidents caused by human error. Meanwhile, various attempts are being made to apply machine learning-based artificial intelligence agent technologies to the aviation safety field. In this paper, we propose a deep reinforcement learning-based artificial intelligence agent that can recognize a CFIT situation and control the aircraft to avoid it in a simulation environment. The paper also describes the composition of the learning environment, the training process and its results, and finally the experimental results obtained with the trained agent. If this work is extended so that the agent also learns from horizontal and vertical terrain-detection radar information and camera imagery in addition to the terrain database, we expect it to become an agent capable of even more robust CFIT autonomous avoidance.
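As a rough illustration of the kind of setup described above, the sketch below pairs a toy terrain-avoidance environment (Gymnasium API) with an off-the-shelf TD3 agent from Stable-Baselines3. The environment name ToyCFITEnv, the three-element observation, the climb-rate action, and the reward shaping are illustrative assumptions only; they do not reproduce the paper's simulation environment, state design, or agent.

# Hypothetical sketch: a toy Gymnasium terrain-avoidance environment trained
# with a TD3 agent from Stable-Baselines3. All names, state variables, and
# reward terms are illustrative assumptions, not the paper's actual setup.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import TD3


class ToyCFITEnv(gym.Env):
    """1-D flight over a random terrain profile; the agent commands climb rate."""

    def __init__(self, track_length=200):
        super().__init__()
        self.track_length = track_length
        # Observation: [height above terrain, current climb rate, terrain slope ahead]
        self.observation_space = spaces.Box(low=-np.inf, high=np.inf,
                                            shape=(3,), dtype=np.float32)
        # Action: commanded climb rate, normalized to [-1, 1]
        self.action_space = spaces.Box(low=-1.0, high=1.0,
                                       shape=(1,), dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        # Random rolling terrain profile and an initial cruise state above it
        self.terrain = np.cumsum(self.np_random.uniform(-5.0, 5.0,
                                                        self.track_length))
        self.pos, self.alt, self.climb = 0, self.terrain[0] + 300.0, 0.0
        return self._obs(), {}

    def _obs(self):
        slope = (self.terrain[min(self.pos + 5, self.track_length - 1)]
                 - self.terrain[self.pos])
        return np.array([self.alt - self.terrain[self.pos],
                         self.climb, slope], dtype=np.float32)

    def step(self, action):
        self.climb = 20.0 * float(action[0])        # apply commanded climb rate
        self.alt += self.climb
        self.pos += 1
        agl = self.alt - self.terrain[self.pos]     # height above ground level
        terminated = agl <= 0.0                     # CFIT: flew into the terrain
        truncated = self.pos >= self.track_length - 1
        # Reward survival, penalize collision and aggressive maneuvering
        reward = -100.0 if terminated else 1.0 - 0.01 * abs(float(action[0]))
        return self._obs(), reward, terminated, truncated, {}


if __name__ == "__main__":
    env = ToyCFITEnv()
    model = TD3("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=20_000)             # short demonstration run
    obs, _ = env.reset()
    action, _ = model.predict(obs, deterministic=True)

TD3 is chosen here only because it handles continuous control actions; the paper does not state which algorithm or library was used, and any other continuous-action deep reinforcement learning method could be substituted in the same structure.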
