http://dx.doi.org/10.13089/JKIISC.2020.30.5.871

Perceptual Ad-Blocker Design For Adversarial Attack  

Kim, Min-jae (Korea University)
Kim, Bo-min (Korea University)
Hur, Junbeom (Korea University)
Abstract
Perceptual ad-blocking is a new ad-blocking technique that detects online advertisements using an artificial intelligence-based advertising image classification model. A recent study has shown that these perceptual ad-blocking models are vulnerable to adversarial attacks, which add carefully crafted noise to images so that the model misclassifies them. In this paper, we show that the existing perceptual ad-blocking technique is vulnerable to several kinds of adversarial examples, and that Defense-GAN and MagNet, which perform well on the MNIST and CIFAR-10 datasets, also perform well on an advertising dataset. Building on this, we present a new advertising image classification model that uses Defense-GAN and MagNet to remain robust against adversarial attacks. Experiments with various existing adversarial attack techniques show that the proposed model preserves accuracy and performance through robust image classification, and that it can even withstand, to a certain degree, white-box attacks by adversaries who know the details of the defense techniques.
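For context, a minimal PyTorch sketch of the two ingredients the abstract names: the FGSM attack of Goodfellow et al. [2], which adds signed-gradient noise to cause misclassification, and a MagNet-style detector [5], which rejects inputs that the defense's autoencoder reconstructs poorly. The classifier, autoencoder, epsilon, and threshold below are illustrative assumptions, not the paper's actual implementation.

import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """FGSM [2]: one signed-gradient step that pushes x toward misclassification.
    epsilon is a hypothetical perturbation budget."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Add epsilon in the direction of the loss gradient's sign,
    # then clip back to the valid pixel range [0, 1].
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

def magnet_detect(autoencoder, x, threshold=0.01):
    """MagNet-style detector [5]: flag inputs with high reconstruction error.
    Clean images lie near the training manifold and reconstruct well;
    adversarial ones tend not to. The threshold is a hypothetical value."""
    with torch.no_grad():
        error = ((autoencoder(x) - x) ** 2).flatten(1).mean(dim=1)
    return error > threshold  # True = reject as adversarial

In a pipeline like the one the abstract describes, such a detector (together with a Defense-GAN-style reconstruction step [4]) would sit in front of the advertising image classifier, so that perturbed ad images are either rejected or projected back toward the clean data manifold before classification.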
Keywords
Adversarial example; Perceptual Ad-Blocker; Defense-GAN; MagNet
References
1 C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow and R. Fergus, "Intriguing properties of neural networks," arXiv preprint arXiv:1312.6199, 2013.
2 I.J. Goodfellow, J. Shlens, and C. Szegedy, "Explaining and harnessing adversarial examples," arXiv preprint arXiv:1412.6572, 2014.
3 F. Tramer, P. Dupre, G. Rusak, G. Pellegrino, and D. Boneh, "AdVersarial: perceptual ad blocking meets adversarial machine learning," Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, pp. 2005-2021, Nov. 2019.
4 P. Samangouei, M. Kabkab and R. Chellappa, "Defense-GAN: protecting classifiers against adversarial attacks using generative models," arXiv preprint arXiv:1805.06605, 2018.
5 D. Meng and H. Chen, "MagNet: a two-pronged defense against adversarial examples," Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pp. 135-147, Oct. 2017.
6 I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville and Y. Bengio, "Generative adversarial nets," Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.
7 Z. Hussain, M. Zhang, X. Zhang, K. Ye, C. Thomas, Z. Agha, N. Ong and A. Kovashka, "Automatic understanding of image and video advertisements," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1705-1715, 2017.
8 Z.A. Din, P. Tigas, S.T. King and B. Livshits, "Percival: making in-browser perceptual ad blocking practical with deep learning," 2020 USENIX Annual Technical Conference, pp. 387-400, 2020.
9 Kobaco, "Broadcasting and Communications Advertising Expenses Survey," https://adstat.kobaco.co.kr/sub/expenditure_data_search.do, Feb. 2020.
10 A. Kurakin, I. Goodfellow and S. Bengio, "Adversarial examples in the physical world," arXiv preprint arXiv:1607.02533, 2016.
11 N. Carlini and D. Wagner, "Towards evaluating the robustness of neural networks," IEEE Symposium on Security and Privacy, pp. 39-57, May 2017.
12 A. Madry, A. Makelov, L. Schmidt, D. Tsipras and A. Vladu, "Towards deep learning models resistant to adversarial attacks," arXiv preprint arXiv:1706.06083, 2017.
13 S.M. Moosavi-Dezfooli, A. Fawzi and P. Frossard, "DeepFool: a simple and accurate method to fool deep neural networks," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2574-2582, 2016.
14 K. He, X. Zhang, S. Ren and J. Sun, "Deep residual learning for image recognition," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016.
15 A. Athalye, N. Carlini and D. Wagner, "Obfuscated gradients give a false sense of security: circumventing defenses to adversarial examples," International Conference on Machine Learning, pp. 274-283, Jul. 2018.
16 S. Qiu, Q. Liu, S. Zhou and C. Wu, "Review of artificial intelligence adversarial attack and defense technologies," Applied Sciences, vol. 9, no. 5, p. 909, 2019.
17 A. Krizhevsky and G. Hinton, Learning multiple layers of features from tiny images, University of Toronto, 2009.
18 Y. LeCun, "The MNIST database of handwritten digits," http://yann.lecun.com/exdb/mnist/, 1998.
19 H. Xiao, K. Rasul and R. Vollgraf, "Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms," arXiv preprint arXiv:1708.07747, 2017.
20 M. Arjovsky, S. Chintala and L. Bottou, "Wasserstein generative adversarial networks," Proceedings of the 34th International Conference on Machine Learning, vol. 70, pp. 214-223, Aug. 2017.