http://dx.doi.org/10.13089/JKIISC.2021.31.1.51

Analysis of Deep Learning Model Vulnerability According to Input Mutation  

Kim, Jaeuk (Information Security LAB, GSI, Yonsei University)
Park, Leo Hyun (Information Security LAB, GSI, Yonsei University)
Kwon, Taekyoung (Information Security LAB, GSI, Yonsei University)
Abstract
Deep learning models can produce false predictions when inputs deviate from the training data through mutation, which can lead to fatal accidents in areas such as autonomous driving and security. To ensure the reliability of a model, its ability to cope with exceptional situations should be verified against various mutations. However, previous studies covered only a limited range of models and applied several mutation types without separating them. Based on CIFAR-10, a dataset widely used for deep learning verification, this study verifies the reliability of six models in total, including several commercialized models and their variants. To this end, six types of input mutation that may occur in real life are applied to the dataset individually, each with various parameters, and the accuracy of each model is compared per mutation to rigorously identify model vulnerabilities associated with a particular mutation type.
Keywords
Deep learning; Mutation; Adversarial machine learning;