Transformer-Based MUM-T Situation Awareness: Agent Status Prediction

  • Received : 2023.05.31
  • Accepted : 2023.09.13
  • Published : 2023.11.30

Abstract

With the advancement of robot intelligence, the concept of manned-unmanned teaming (MUM-T) has garnered considerable attention in military research. In this paper, we present a transformer-based architecture for predicting the health status of agents, using the multi-head attention mechanism to effectively capture the dynamic interactions between friendly and enemy forces. To this end, we first introduce a framework for generating a dataset of battlefield situations. These situations are simulated on a virtual simulator, allowing for a wide range of scenarios without any restrictions on the number of agents, their missions, or their actions. Then, we define the elements crucial for characterizing the battlefield, with a specific emphasis on agent status. The battlefield data are fed into the transformer, with classification heads on top of the encoder layers to categorize each agent's health status. We conduct ablation tests to assess the significance of various factors in determining agents' health status in battlefield scenarios. Under 3-fold cross-validation, the experimental results demonstrate that our model achieves a prediction accuracy of over 98%. In addition, the performance of our model is compared with that of other models such as a convolutional neural network (CNN) and a multilayer perceptron (MLP), and the results establish the superiority of our approach.
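The paper does not publish code; the following is a minimal PyTorch sketch of the architecture the abstract describes, with per-agent feature vectors as input tokens, multi-head self-attention across all agents, and a classification head over the encoder outputs. All module names, feature dimensions, and the number of status classes are illustrative assumptions, not values taken from the paper.

```python
# Illustrative sketch of a transformer-encoder classifier for per-agent
# health-status prediction. Dimensions and class counts are assumptions.
import torch
import torch.nn as nn


class AgentStatusTransformer(nn.Module):
    def __init__(self, feat_dim=32, d_model=128, n_heads=8,
                 n_layers=4, n_status_classes=4):
        super().__init__()
        # Project each agent's raw battlefield features (position, force,
        # mission, etc.) into the transformer embedding space.
        self.embed = nn.Linear(feat_dim, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        # Multi-head self-attention over all friendly and enemy agents lets
        # the model capture their pairwise interactions.
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        # Classification head on top of the encoder output, one prediction
        # per agent token.
        self.head = nn.Linear(d_model, n_status_classes)

    def forward(self, agent_feats, padding_mask=None):
        # agent_feats: (batch, n_agents, feat_dim)
        x = self.embed(agent_feats)
        x = self.encoder(x, src_key_padding_mask=padding_mask)
        return self.head(x)  # (batch, n_agents, n_status_classes)


if __name__ == "__main__":
    model = AgentStatusTransformer()
    dummy = torch.randn(2, 10, 32)   # 2 scenarios, 10 agents each
    print(model(dummy).shape)        # torch.Size([2, 10, 4])
```

Variable numbers of agents per scenario would be handled through the padding mask, which is consistent with the abstract's claim of no restriction on the number of agents.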

Keywords

Acknowledgement

This work was supported by a Korea Research Institute for defense Technology planning and advancement (KRIT) grant funded by the Korea government (DAPA, Defense Acquisition Program Administration) (No. 21-107-E00-009-02, "Realtime complex battlefield situation awareness technology").
