http://dx.doi.org/10.17661/jkiiect.2022.15.1.27

Study on the White Noise Effect Against Adversarial Attack for Deep Learning Model for Image Recognition

Lee, Youngseok (Department of Electronic Engineering, Chungwoon University)
Kim, Jongweon (Department of Electronic Engineering, Sangmyung University)
Publication Information
The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.15, no.1, 2022, pp. 27-35
Abstract
In this paper, we propose a white-noise addition method to prevent misclassification of a deep learning system under adversarial attacks. The proposed method adds white noise to the input image, whether it is a benign image or an adversarial example. The experimental results show that the proposed method is robust to three adversarial attacks: the FGSM, BIM, and CW attacks. The recognition accuracies of ResNet models with 18, 34, 50, and 101 layers are enhanced when white noise is added to the adversarial test dataset, while adding noise does not affect the classification of the benign test dataset. The proposed method can serve as a defense against adversarial attacks and can replace time-consuming and expensive defenses such as adversarial training or replacing the deep learning model.
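As a rough illustration of the pipeline described above, the sketch below adds zero-mean Gaussian (white) noise to an input image before classification and compares the prediction on an FGSM adversarial example with and without the noise. This is an assumed PyTorch implementation, not the authors' code; the model (torchvision ResNet-18), the number of classes, the noise standard deviation, and the attack strength epsilon are illustrative choices.

```python
# Minimal sketch (assumed implementation): white-noise preprocessing as a defense,
# evaluated against an FGSM adversarial example on a ResNet classifier.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None, num_classes=10)   # illustrative 10-class model
model.eval()

def fgsm_attack(x, y, epsilon=8 / 255):
    """Craft an FGSM adversarial example with one signed-gradient step."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

def add_white_noise(x, sigma=0.05):
    """Defense in the spirit of the paper: add zero-mean Gaussian (white) noise."""
    return (x + sigma * torch.randn_like(x)).clamp(0.0, 1.0)

# Toy evaluation: classify an adversarial example with and without the noise defense.
x = torch.rand(1, 3, 224, 224)   # placeholder input image
y = torch.tensor([3])            # placeholder ground-truth label
x_adv = fgsm_attack(x, y)
with torch.no_grad():
    pred_adv = model(x_adv).argmax(dim=1)
    pred_def = model(add_white_noise(x_adv)).argmax(dim=1)
print("adversarial prediction:", pred_adv.item(),
      "| with white noise:", pred_def.item())
```

In the setting described in the abstract, the same noise preprocessing would be applied to benign and adversarial inputs alike, with the noise level chosen so that benign-image accuracy is not degraded.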
Keywords
Deep learning; Adversarial attack; FGSM attack; BIM attack; CW attack; White noise; Perturbation