http://dx.doi.org/10.13089/JKIISC.2022.32.6.1081

Query-Efficient Black-Box Adversarial Attack Methods on Face Recognition Model  

Seo, Seong-gwan (Sejong University)
Son, Baehoon (Sejong University)
Yun, Joobeom (Sejong University)
Abstract
Face recognition models are used for identity verification on smartphones, providing convenience to many users. Consequently, security review of DNN models is becoming important, and adversarial attacks are a well-known vulnerability of DNN models. Adversarial attacks have evolved into decision-based attack techniques, which perform the attack using only the recognition results of the deep learning model. However, the existing decision-based attack technique [14] requires a large number of queries to generate adversarial examples; in particular, approximating the gradient consumes many queries. Therefore, in this paper, we propose a method of generating adversarial examples using orthogonal-space sampling and dimensionality-reduction sampling, which avoids wasting the queries that the existing decision-based attack technique [14] spends on approximating the gradient. Experiments show that, compared to the existing attack technique [14], our method reduces the perturbation size of adversarial examples by a factor of about 2.4 and increases the attack success rate by 14%. These experimental results demonstrate that the proposed adversarial example generation method has superior attack performance.
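The query-hungry step discussed above — approximating the gradient from hard-label decisions alone — can be illustrated with a minimal sketch. The code below is a HopSkipJumpAttack-style [4] Monte Carlo estimate of the gradient direction at a boundary point, and its `low_dim` option sketches one plausible form of dimensionality-reduction sampling (drawing noise in a smaller grid and upsampling it); this is an illustration under those assumptions, not the paper's exact construction, and the names `estimate_gradient`, `decision`, and `low_dim` are mine.

```python
import numpy as np

np.random.seed(0)  # for reproducibility of the toy run below

def estimate_gradient(decision, x, n_queries=100, sigma=1e-2, low_dim=None):
    """Monte Carlo estimate of the gradient direction at a boundary point x,
    using only a hard-label oracle `decision` (True = classified adversarial).
    Each probe costs one model query -- this is the step whose query cost
    the paper's sampling strategies aim to reduce.

    If `low_dim` is given, noise is drawn on that smaller grid and upsampled
    to the input resolution (a sketch of dimensionality-reduction sampling).
    """
    acc = np.zeros_like(x, dtype=float)
    for _ in range(n_queries):
        if low_dim is not None:
            small = np.random.randn(*low_dim)
            # nearest-neighbour upsampling back to the input resolution
            u = np.kron(small, np.ones((x.shape[0] // low_dim[0],
                                        x.shape[1] // low_dim[1])))
        else:
            u = np.random.randn(*x.shape)
        u /= np.linalg.norm(u)
        # +1 if the probe is still adversarial, -1 otherwise
        phi = 1.0 if decision(x + sigma * u) else -1.0
        acc += phi * u
    g = acc / n_queries
    return g / (np.linalg.norm(g) + 1e-12)

# Toy hard-label oracle: "adversarial" iff the pixel sum is positive,
# so the true boundary normal is the all-ones direction.
decision = lambda x: x.sum() > 0
x_boundary = np.zeros((8, 8))  # a point exactly on the decision boundary
g = estimate_gradient(decision, x_boundary, n_queries=1000)
g_low = estimate_gradient(decision, x_boundary, n_queries=250, low_dim=(4, 4))
```

In this toy setting the low-dimensional variant reaches a comparable alignment with the true boundary normal using far fewer queries, because the noise is sampled from a 16-dimensional rather than a 64-dimensional space — the intuition behind reducing the sampling dimension.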
Keywords
Black-box attack; Adversarial attack; Face recognition model;
References
1 N. Papernot, P. McDaniel, S. Jha, et al., "The limitations of deep learning in adversarial settings," 2016 IEEE European Symposium on Security and Privacy (EuroS&P), pp. 372-387, March 2016.
2 A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. "Towards deep learning models resistant to adversarial attacks." arXiv preprint arXiv:1706.06083, 2017.
3 W. Brendel, J. Rauber, and M. Bethge, "Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models" International Conference on Learning Representations, May 2018.
4 J. Chen, M. Jordan, and M. Wainwright, "HopSkipJumpAttack: A Query-Efficient Decision-Based Attack" 2020 IEEE Symposium on Security and Privacy (SP), pp. 1277-1294, May 2020.
5 W. Liu, Y. Wen, Z. Yu, M. Li, B. Raj, and L. Song, "SphereFace: Deep hypersphere embedding for face recognition," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2017, pp. 6738-6746.
6 K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," arXiv:1512.03385, 2015.
7 J. Deng, J. Guo, and S. Zafeiriou, "ArcFace: Additive angular margin loss for deep face recognition," arXiv:1801.07698, 2018.
8 N. Carlini and D. Wagner, "Towards evaluating the robustness of neural networks." 2017 IEEE Symposium on Security and Privacy (SP), pp. 39-57, May 2017.
9 I. Goodfellow, J. Shlens, and C. Szegedy, "Explaining and harnessing adversarial examples," arXiv preprint arXiv:1412.6572, 2014.
10 S. Moosavi-Dezfooli, A. Fawzi, and P. Frossard, "DeepFool: a simple and accurate method to fool deep neural networks" 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2574-2582, June 2016.
11 A. Athalye, L. Engstrom, A. Ilyas, and K. Kwok, "Synthesizing Robust Adversarial Examples," International Conference on Machine Learning (ICML), pp. 284-293, July 2018.
12 A. Athalye, N. Carlini, and D. Wagner, "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples," International Conference on Machine Learning (ICML), pp. 274-283, July 2018.
13 G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller, "Labeled faces in the wild: A database for studying face recognition in unconstrained environments," Technical Report 07-49, University of Massachusetts, Amherst, 2007.
14 J. Su, D. V. Vargas, and K. Sakurai, "One pixel attack for fooling deep neural networks," IEEE Transactions on Evolutionary Computation, vol. 23, no. 5, pp. 828-841, October 2019.
15 P. Chen, H. Zhang, Y. Sharma, et al., "ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models," 2017 ACM SIGSAC Conference on Computer and Communications Security, pp. 15-26, November 2017.
16 N. Papernot, P. McDaniel, I. Goodfellow, et al., "Practical Black-Box Attacks against Machine Learning," 2017 ACM Asia Conference on Computer and Communications Security (ASIA CCS '17), pp. 506-519, April 2017.
17 K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv:1409.1556, 2014.
18 C. Szegedy, W. Zaremba, I. Sutskever, et al., "Intriguing properties of neural networks," arXiv preprint arXiv:1312.6199, 2013.