• Title/Summary/Keyword: Autonomous Air Combat Engagement (자율 공중 교전)


Design of an Autonomous Air Combat Guidance Law using a Virtual Pursuit Point for UCAV (무인전투기를 위한 가상 추적점 기반 자율 공중 교전 유도 법칙 설계)

  • You, Dong-Il; Shim, Hyunchul
    • Journal of the Korean Society for Aeronautical & Space Sciences, v.42 no.3, pp.199-212, 2014
  • This paper describes an autonomous air combat guidance law using a Virtual Pursuit Point (VPP) for one-on-one close engagement of an Unmanned Combat Aerial Vehicle (UCAV). The VPPs, consisting of virtual lag and lead points, are introduced to carry out tactical combat maneuvers. They are generated from the fighter's aerodynamic performance, the Basic Fighter Maneuver (BFM) turn circle, total energy, and weapon characteristics. The UCAV selects a single VPP and executes pursuit maneuvers based on a smoothing function that evaluates the probabilities of the pursuit types and switches maneuvers according to the current combat state. The proposed law is demonstrated in high-fidelity real-time combat simulations using a commercial fighter model and the X-Plane simulator.
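The lag/lead pursuit-point construction and the smoothing-based switching described in this abstract can be illustrated with a short sketch. The code below is a minimal 2-D sketch, not the paper's formulation: the placement of the points on the target's turn circle, the fixed `offset_angle`, and the logistic range-based blending weight in `blended_vpp` are all illustrative assumptions.

```python
import numpy as np

def turn_circle_center(pos_t, vel_t, radius, direction=1):
    """Center of the target's instantaneous turn circle (2-D sketch).
    direction = +1 for a left turn, -1 for a right turn."""
    v_hat = vel_t / np.linalg.norm(vel_t)
    normal = direction * np.array([-v_hat[1], v_hat[0]])
    return pos_t + radius * normal

def lag_and_lead_points(pos_t, vel_t, radius, offset_angle):
    """Virtual lag/lead points placed behind/ahead of the target along its
    turn circle by -/+ offset_angle (illustrative geometry)."""
    center = turn_circle_center(pos_t, vel_t, radius)
    rel = pos_t - center

    def rotate(v, a):
        c, s = np.cos(a), np.sin(a)
        return np.array([c * v[0] - s * v[1], s * v[0] + c * v[1]])

    lag_point = center + rotate(rel, -offset_angle)   # behind the target
    lead_point = center + rotate(rel, +offset_angle)  # ahead of the target
    return lag_point, lead_point

def blended_vpp(lag_point, lead_point, range_to_target, gun_range, k=0.005):
    """Smoothly switch between pursuit types with a logistic weight on range:
    far from the target -> lag pursuit, inside gun range -> lead pursuit."""
    w_lead = 1.0 / (1.0 + np.exp(k * (range_to_target - gun_range)))
    return (1.0 - w_lead) * lag_point + w_lead * lead_point

# Example: pick a single pursuit point for the current combat state.
lag, lead = lag_and_lead_points(np.array([1000.0, 500.0]),
                                np.array([200.0, 50.0]),
                                radius=800.0, offset_angle=np.deg2rad(30))
vpp = blended_vpp(lag, lead, range_to_target=1500.0, gun_range=900.0)
```

A single logistic weight is used here only to show how a smoothing function can avoid abrupt switching between pursuit types; the paper evaluates probabilities over several combat-state variables rather than range alone.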

Two Circle-based Aircraft Head-on Reinforcement Learning Technique using Curriculum (커리큘럼을 이용한 투서클 기반 항공기 헤드온 공중 교전 강화학습 기법 연구)

  • Insu Hwang; Jungho Bae
    • Journal of the Korea Institute of Military Science and Technology, v.26 no.4, pp.352-360, 2023
  • Recently, AI pilots based on reinforcement learning have been developed to a level that is more flexible than rule-based methods and may eventually replace human pilots. In this paper, a curriculum is used to support reinforcement learning of head-on combat. Head-on engagement is difficult to learn with reinforcement learning alone, but with the proposed two-circle-based head-on air combat learning technique the difficulty is increased gradually so that the ownship becomes proficient at head-on combat. On the two-circle geometry, the Antenna Train Angle (ATA) between the ownship and the target is gradually increased and the Aspect Angle (AA) gradually decreased as learning proceeds. Agents trained with and without the curriculum were then engaged against a rule-based model; the win ratio of the curriculum-based model rose to nearly 100 %, confirming its superior performance.
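The abstract describes the curriculum only at a high level, so the schedule below is a hedged sketch: the linear interpolation of ATA/AA across stages, the start/end angles, and the simplified 2-D initial-state placement are assumptions used purely to illustrate how such a curriculum could parameterise the initial conditions of each training episode.

```python
import numpy as np

def curriculum_angles(stage, n_stages,
                      ata_start_deg=0.0, ata_end_deg=60.0,
                      aa_start_deg=120.0, aa_end_deg=0.0):
    """Illustrative schedule: following the abstract, the initial ATA grows
    and the AA shrinks as stages advance, moving the start geometry toward
    the harder head-on case. Start/end values are placeholders, not the
    paper's numbers."""
    frac = stage / max(n_stages - 1, 1)
    ata_deg = (1.0 - frac) * ata_start_deg + frac * ata_end_deg
    aa_deg = (1.0 - frac) * aa_start_deg + frac * aa_end_deg
    return ata_deg, aa_deg

def make_episode_initial_state(stage, n_stages, rng=None, range_m=3000.0):
    """Sample a simplified 2-D start state for the current curriculum stage.
    Ownship sits at the origin heading along +x; the target is placed at the
    scheduled ATA off the ownship's nose, with its heading offset from the
    target-to-ownship line of sight by the scheduled AA (angle conventions
    simplified for illustration)."""
    if rng is None:
        rng = np.random.default_rng()
    ata, aa = np.deg2rad(curriculum_angles(stage, n_stages))
    ata = ata * rng.choice([-1.0, 1.0])            # left or right of the nose
    target_pos = range_m * np.array([np.cos(ata), np.sin(ata)])
    los_back = np.arctan2(-target_pos[1], -target_pos[0])  # target -> ownship
    target_heading = los_back + aa * rng.choice([-1.0, 1.0])
    return {"ownship_pos": np.zeros(2), "ownship_heading": 0.0,
            "target_pos": target_pos, "target_heading": target_heading}

# Example: draw one start state per stage of a 5-stage curriculum.
states = [make_episode_initial_state(s, 5) for s in range(5)]
```

In a full training loop, each stage's sampled initial states would feed the RL environment until a performance threshold is met, after which the next (harder) stage begins; that loop and the reward design are outside the scope of this sketch.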