http://dx.doi.org/10.13089/JKIISC.2019.29.1.117

Security Vulnerability Verification for Open Deep Learning Libraries  

Jeong, JaeHan (Department of Computer Engineering, Ajou University)
Shon, Taeshik (Department of Computer Engineering, Ajou University)
Abstract
Deep learning, which has recently been applied in various fields, is threatened by adversarial attacks. In this paper, we experimentally verify that adversarial samples generated by a malicious attacker lower the classification accuracy of image classification models. Using the MNIST dataset, we measured classification accuracy after injecting adversarial samples into an autoencoder-based classification model and a CNN (convolutional neural network) classification model, implemented with the TensorFlow and PyTorch libraries. Adversarial samples were generated by perturbing the MNIST test set with JSMA (Jacobian-based Saliency Map Attack) and FGSM (Fast Gradient Sign Method). When these samples were injected into the classification models, accuracy decreased by between 21.82% and 39.08%.
Keywords
Adversarial attack; MNIST; deep learning; security; autoencoder; convolutional neural network
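The FGSM attack described in the abstract perturbs each input pixel by a fixed step in the direction of the sign of the loss gradient with respect to the input, x_adv = clip(x + ε·sign(∇ₓL), 0, 1). The paper's own models are TensorFlow/PyTorch classifiers; as a minimal self-contained sketch (not the authors' code), the same idea can be shown with a hypothetical binary logistic classifier, where the cross-entropy gradient with respect to the input is (p − y)·w:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """FGSM for a binary logistic classifier p = sigmoid(w.x + b).

    The input-gradient of the cross-entropy loss is (p - y) * w, so the
    adversarial sample steps eps in the direction of its sign, then clips
    back into the valid pixel range [0, 1].
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

# Toy "image": three pixels the model confidently assigns to class 1.
# (weights, bias, and input are illustrative, not from the paper)
w = np.array([2.0, -1.0, 0.5])
b = -0.5
x = np.array([0.8, 0.1, 0.9])
y = 1.0

x_adv = fgsm(x, y, w, b, eps=0.25)
p_clean = sigmoid(w @ x + b)
p_adv = sigmoid(w @ x_adv + b)
print(p_clean, p_adv)  # confidence in the true class drops after the attack
```

With a larger ε the perturbation becomes more visible but degrades accuracy further, which matches the paper's observation that attack strength trades off against detectability.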
  • Reference
1 Papernot, Nicolas, et al. "Practical black-box attacks against machine learning." Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, ACM, pp. 506-519, Apr. 2017.
2 Zhang, Guoming, et al. "DolphinAttack: Inaudible voice commands." Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, ACM, pp. 103-117, Oct. 2017.
3 Kurakin, Alexey, Ian Goodfellow, and Samy Bengio. "Adversarial examples in the physical world." arXiv preprint arXiv:1607.02533, 2016.
4 Finlayson, Samuel G., Isaac S. Kohane, and Andrew L. Beam. "Adversarial attacks against medical deep learning systems." arXiv preprint arXiv:1804.05296, 2018.
5 Goodfellow, Ian J., Jonathon Shlens, and Christian Szegedy. "Explaining and harnessing adversarial examples." arXiv preprint arXiv:1412.6572, 2014.
6 Papernot, Nicolas, et al. "The limitations of deep learning in adversarial settings." 2016 IEEE European Symposium on Security and Privacy (EuroS&P), IEEE, pp. 372-387, Mar. 2016.
7 Papernot, Nicolas, et al. "cleverhans v2.0.0: an adversarial machine learning library." arXiv preprint arXiv:1610.00768, 2016.
8 Vincent, Pascal, et al. "Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion." Journal of Machine Learning Research, vol. 11, pp. 3371-3408, Dec. 2010.
9 Heckerman, David, and Christopher Meek. "Models and selection criteria for regression and classification." Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence, Morgan Kaufmann Publishers Inc., pp. 223-228, Aug. 1997.
10 Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. "Imagenet classification with deep convolutional neural networks." Advances in Neural Information Processing Systems, pp. 1097-1105, 2012.
11 Bottou, Leon, et al. "Comparison of classifier methods: a case study in handwritten digit recognition." Proceedings of the 12th IAPR International Conference on Pattern Recognition, vol. 2, IEEE, pp. 77-82, Oct. 1994.
12 Abadi, Martin, et al. "TensorFlow: a system for large-scale machine learning." OSDI, vol. 16, pp. 265-283, Nov. 2016.