Camouflaged Adversarial Patch Attack on Object Detector

  • Jeonghun Kim (Advanced Defense Science & Technology Research Institute, Agency for Defense Development) ;
  • Hunmin Yang (Advanced Defense Science & Technology Research Institute, Agency for Defense Development) ;
  • Se-Yoon Oh (Advanced Defense Science & Technology Research Institute, Agency for Defense Development)
  • Received : 2022.10.28
  • Accepted : 2023.01.31
  • Published : 2023.02.05

Abstract

Adversarial attacks have received great attention for their capacity to deceive state-of-the-art neural networks by modifying objects in the physical domain. Patch-based attacks in particular have attracted much attention for their optimization effectiveness and feasible application to arbitrary objects when attacking neural network-based object detectors. However, despite their strong attack performance, the generated patches are highly perceptible to humans, violating the fundamental assumption of adversarial examples. In this paper, we propose a camouflaged adversarial patch optimization method that uses military camouflage assessment metrics to generate naturalistic patch attacks. We also investigate camouflaged attack loss functions and the application of various camouflaged patches to army tank images, and we validate the proposed approach with extensive experiments attacking the YOLOv5 detection model. Our method produces more natural, realistic-looking camouflaged patches while achieving competitive attack performance.
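
The abstract describes a joint optimization that balances attack strength against camouflage quality. Below is a minimal PyTorch sketch of that general idea, assuming a YOLO-style detector whose raw output places the objectness score at index 4: the patch is trained to suppress detections while a camouflage term keeps its color statistics close to the background. The fixed-position patch applier, the simple color-statistics camouflage term, and all function names here are illustrative assumptions, not the authors' published losses or implementation.

import torch

def apply_patch(images, patch, y=96, x=96):
    # Hypothetical applier: paste the patch at a fixed location. A real
    # pipeline would place it on the target object with geometric and
    # photometric transforms.
    patched = images.clone()
    _, ph, pw = patch.shape
    patched[:, :, y:y + ph, x:x + pw] = patch
    return patched

def camouflage_loss(patch, background):
    # Hypothetical camouflage-assessment term: penalize the gap in
    # per-channel color statistics between the patch and the background,
    # a simple stand-in for the paper's military camouflage metrics.
    mean_gap = (patch.mean(dim=(1, 2)) - background.mean(dim=(1, 2))).pow(2).sum()
    std_gap = (patch.std(dim=(1, 2)) - background.std(dim=(1, 2))).pow(2).sum()
    return mean_gap + std_gap

def optimize_patch(detector, images, background, steps=500, lr=0.01, alpha=1.0):
    # Assumes `detector` returns raw YOLO-style predictions of shape
    # (batch, num_boxes, 5 + num_classes) with objectness at index 4.
    patch = torch.rand(3, 64, 64, requires_grad=True)  # trainable RGB patch in [0, 1]
    optimizer = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        preds = detector(apply_patch(images, patch))
        objectness = preds[..., 4].max()               # strongest detection to suppress
        loss = objectness + alpha * camouflage_loss(patch, background)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        patch.data.clamp_(0.0, 1.0)                    # keep the patch a valid image
    return patch.detach()

In the paper's setting, the color-statistics term would be replaced by the military camouflage assessment metrics, and the patch would be rendered onto army tank imagery with realistic transforms before being fed to the detector.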
