Adversarial Attacks for Deep Learning-Based Infrared Object Detection

  • Kim, Hoseong (The 1st Research and Development Institute, Agency for Defense Development) ;
  • Hyun, Jaeguk (The 1st Research and Development Institute, Agency for Defense Development) ;
  • Yoo, Hyunjung (The 1st Research and Development Institute, Agency for Defense Development) ;
  • Kim, Chunho (The 1st Research and Development Institute, Agency for Defense Development) ;
  • Jeon, Hyunho (Satellite & Space Exploration Systems Engineering and Architecture R&D Division, Korea Aerospace Research Institute)
  • Received : 2021.06.23
  • Accepted : 2021.11.19
  • Published : 2021.12.05

Abstract

Recently, infrared object detection (IOD) has been studied extensively, driven by the rapid progress of deep neural networks (DNNs). Adversarial attacks, which add imperceptible perturbations to an input, can dramatically degrade the performance of a DNN. However, most work on adversarial attacks focuses on visible-image recognition (VIR), and few methods exist for IOD. We propose deep learning-based adversarial attacks for IOD by extending several state-of-the-art adversarial attacks for VIR. We validate our claims through comprehensive experiments on two challenging IOD datasets, FLIR and MSOD.
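To illustrate the kind of gradient-based, imperceptible-perturbation attack the abstract refers to, the sketch below implements the fast gradient sign method (FGSM) of Goodfellow et al. (ref. 2) on a toy linear classifier. This is a minimal NumPy illustration of the general technique only; the toy model, weights, and 16-pixel "image" are hypothetical stand-ins, not the detection-specific attacks proposed in the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grad(w, x, y):
    """Binary cross-entropy loss of a linear model, and its gradient w.r.t. the input x."""
    p = sigmoid(w @ x)
    loss = -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))
    grad_x = (p - y) * w  # d(loss)/dx for the linear model
    return loss, grad_x

def fgsm(w, x, y, eps):
    """One-step L-infinity attack: move each pixel by eps along the sign of the loss gradient."""
    _, grad_x = loss_and_grad(w, x, y)
    x_adv = x + eps * np.sign(grad_x)
    return np.clip(x_adv, 0.0, 1.0)  # keep pixel values in the valid [0, 1] range

rng = np.random.default_rng(0)
w = rng.normal(size=16)    # toy "trained" weights (hypothetical)
x = rng.uniform(size=16)   # toy 16-pixel image with true label 1
y = 1.0

clean_loss, _ = loss_and_grad(w, x, y)
x_adv = fgsm(w, x, y, eps=0.1)
adv_loss, _ = loss_and_grad(w, x_adv, y)
# The perturbation is bounded by eps per pixel, yet the loss increases.
```

The per-pixel bound (`eps`) is what makes the perturbation visually imperceptible while still being adversarial; iterative variants such as PGD (ref. 14) repeat this step with projection back into the eps-ball.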

References

  1. J. Gu et al., "Recent Advances in Convolutional Neural Networks," Pattern Recognition, Vol. 77, pp. 354-377, 2018. https://doi.org/10.1016/j.patcog.2017.10.013
  2. I. Goodfellow et al., "Explaining and Harnessing Adversarial Examples," International Conference on Learning Representations, 2015.
  3. C. Szegedy et al., "Intriguing Properties of Neural Networks," International Conference on Learning Representations, 2014.
  4. S. Ren et al., "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 39, No. 6, pp. 1137-1149, 2017. https://doi.org/10.1109/TPAMI.2016.2577031
  5. J. Deng et al., "ImageNet: A Large-Scale Hierarchical Image Database," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 248-255, 2009.
  6. T. Lin et al., "Microsoft COCO: Common Objects in Context," European Conference on Computer Vision (ECCV), pp. 740-755, 2014.
  7. M. Tan et al., "EfficientDet: Scalable and Efficient Object Detection," IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10778-10787, 2020.
  8. A. Bochkovskiy et al., "YOLOv4: Optimal Speed and Accuracy of Object Detection," arXiv preprint arXiv:2004.10934, 2020.
  9. T. Karasawa et al., "Multispectral Object Detection for Autonomous Vehicles," Proceedings of the Thematic Workshops of ACM Multimedia, pp. 35-43, 2017.
  10. D. Feng et al., "Deep Multi-modal Object Detection and Semantic Segmentation for Autonomous Driving: Datasets, Methods, and Challenges," IEEE Transactions on Intelligent Transportation Systems, pp. 1-20, 2020.
  11. "FREE FLIR Thermal Dataset for Algorithm Training," FLIR Systems. Accessed April 1, 2021. https://flir.com/oem/adas/adas-dataset-form/.
  12. J. Rony et al., "Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses," Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4322-4330, 2019.
  13. A. Kurakin et al., "Adversarial Examples in the Physical World," International Conference on Learning Representations(Workshop Track Proceedings), 2017.
  14. A. Madry et al., "Towards Deep Learning Models Resistant to Adversarial Attacks," International Conference on Learning Representations, 2018.
  15. U. Jang et al., "Objective Metrics and Gradient Descent Algorithms for Adversarial Examples in Machine Learning," Proceedings of the Annual Computer Security Applications Conference, pp. 262-277, 2017.
  16. H. Hosseini et al., "On the Limitation of Convolutional Neural Networks in Recognizing Negative Images," IEEE International Conference on Machine Learning and Applications, pp. 352-358, 2017.
  17. T. Miyato et al., "Distributional Smoothing with Virtual Adversarial Training," International Conference on Learning Representations, 2016.
  18. S. Moosavi-Dezfooli et al., "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks," Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2574-2582, 2016.
  19. J. Rauber et al., "Fast Differentiable Clipping-Aware Normalization and Rescaling," arXiv preprint arXiv:2007.07677, 2020.
  20. N. Carlini et al., "Towards Evaluating the Robustness of Neural Networks," IEEE Symposium on Security and Privacy, pp. 39-57, 2017.
  21. C. Xie et al., "Adversarial Examples for Semantic Segmentation and Object Detection," Proceedings of the IEEE International Conference on Computer Vision, pp. 1369-1378, 2017.
  22. Y. Li et al., "Robust Adversarial Perturbation on Deep Proposal-based Models," British Machine Vision Conference, 2018.
  23. X. Liu et al., "DPatch: An Adversarial Patch Attack on Object Detectors," AAAI Workshop on Artificial Intelligence Safety, 2019.
  24. R. Selvaraju et al., "Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization," Proceedings of the IEEE International Conference on Computer Vision, pp. 618-626, 2017.