
Autonomous Battle Tank Detection and Aiming Point Search Using Imagery

A Study on Autonomous Tank Detection and Aiming Point Search Based on Image Information

  • Received : 2018.01.23
  • Accepted : 2018.04.16
  • Published : 2018.06.30

Abstract

This paper presents autonomous detection and aiming-point computation for a battle tank using RGB images. The maximally stable extremal regions (MSER) algorithm was implemented to find features of the tank, which were matched against images extracted from streaming video to locate the region of interest where the tank is present. A median filter was applied to remove noise in the region of interest and to reduce the camouflage effects of the tank. For tank segmentation, k-means clustering was used to autonomously distinguish the tank from its background, and both the erosion and dilation operations of morphology were applied to extract the tank shape without noise, producing a binary image with 1 for the tank and 0 for the background. Sobel edge detection was then used to trace the outline of the tank, from which the aiming point at the center of the tank was calculated. For performance measurement, accuracy, precision, recall, and F-measure were computed from a confusion matrix, yielding 91.6%, 90.4%, 85.8%, and 88.1%, respectively.
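
Read as a pipeline, the abstract maps onto standard OpenCV primitives. The sketch below is a minimal Python/OpenCV illustration of that pipeline, not the authors' implementation: the function name find_aiming_point, the kernel sizes, K=2, the attempt count, and the "larger cluster is the tank" heuristic are all illustrative assumptions.

    import cv2
    import numpy as np

    def find_aiming_point(frame_bgr):
        """Return an (x, y) aiming point in frame coordinates, or None."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)

        # 1) MSER features locate the region of interest (ROI) holding the tank.
        mser = cv2.MSER_create()
        _, boxes = mser.detectRegions(gray)
        if len(boxes) == 0:
            return None
        x0 = int(min(b[0] for b in boxes)); y0 = int(min(b[1] for b in boxes))
        x1 = int(max(b[0] + b[2] for b in boxes)); y1 = int(max(b[1] + b[3] for b in boxes))
        roi = frame_bgr[y0:y1, x0:x1]

        # 2) Median filter: suppress noise and camouflage texture in the ROI.
        roi = cv2.medianBlur(roi, 5)

        # 3) k-means (K=2) on RGB values separates tank pixels from background.
        samples = roi.reshape(-1, 3).astype(np.float32)
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
        _, labels, _ = cv2.kmeans(samples, 2, None, criteria, 5,
                                  cv2.KMEANS_RANDOM_CENTERS)
        labels = labels.reshape(roi.shape[:2])
        # Heuristic assumption: the larger cluster inside the ROI is the tank.
        tank_label = np.argmax(np.bincount(labels.ravel()))
        mask = np.where(labels == tank_label, 255, 0).astype(np.uint8)

        # 4) Erosion then dilation cleans the mask, yielding the binary image
        #    described in the abstract (tank vs. background).
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
        mask = cv2.dilate(cv2.erode(mask, kernel), kernel)

        # 5) Sobel gradients trace the tank outline.
        gx = cv2.Sobel(mask, cv2.CV_64F, 1, 0, ksize=3)
        gy = cv2.Sobel(mask, cv2.CV_64F, 0, 1, ksize=3)
        ys, xs = np.nonzero(cv2.magnitude(gx, gy) > 0)
        if len(xs) == 0:
            return None

        # 6) Aiming point: centroid of the outline, mapped back to frame coords.
        return (x0 + float(xs.mean()), y0 + float(ys.mean()))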

This paper is a basic study of intelligent target acquisition/processing technology by which an unmanned light combat vehicle, one type of unmanned ground combat system, detects a battle tank on its own and computes an aiming point based on RGB imagery. To develop a method by which the unmanned light combat vehicle autonomously detects and aims at an enemy tank when it encounters one on the battlefield, the key features of the tank were identified and extracted from the imagery; the tank's outline was extracted and analyzed using image-processing and artificial-intelligence algorithms, namely maximally stable extremal regions, the median filter, k-means clustering, and morphological filtering; and the identified outline was vectorized to compute an aiming point directed at the center of the tank. In addition, to measure the performance of this study, imagery of the main battle tanks of advanced countries was collected and analyzed, and the proposed method achieved an objective tank-detection performance of 91.6% accuracy, 90.4% precision, 85.8% recall, and an 88.1% F-measure. We hope that this study will contribute to the research and development of intelligent target acquisition/processing technology for unmanned combat systems.
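
For reference, the four reported figures are the standard confusion-matrix metrics. The helper below (the name confusion_metrics and its inputs are illustrative placeholders, not the paper's raw counts) shows how each value follows from true-positive (TP), false-positive (FP), false-negative (FN), and true-negative (TN) counts; the reported precision and recall are consistent with the reported F-measure, since 2 × 0.904 × 0.858 / (0.904 + 0.858) ≈ 0.881.

    def confusion_metrics(tp, fp, fn, tn):
        """Standard detection metrics from confusion-matrix counts."""
        accuracy  = (tp + tn) / (tp + fp + fn + tn)  # correct decisions / all decisions
        precision = tp / (tp + fp)                   # fraction of detections that were tanks
        recall    = tp / (tp + fn)                   # fraction of tanks that were detected
        f_measure = 2 * precision * recall / (precision + recall)
        return accuracy, precision, recall, f_measure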

