
Research Trends in Artificial Intelligence Security Attacks and Countermeasures

Ryu, Gwonsang (Department of Convergence Software, Graduate School, Soongsil University)
Choi, Daeseon (School of Software, Soongsil University)
References
1 K. He, X. Zhang, S. Ren, and J. Sun, "Identity Mappings in Deep Residual Networks," European Conference on Computer Vision, Springer, pp. 630-645, 2016.
2 C. Szegedy, S. Ioffe, and V. Vanhoucke, "Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning," arXiv preprint arXiv:1602.07261, 2016.
3 K. He, G. Gkioxari, P. Dollar, and R. Girshick, "Mask R-CNN," Proceedings of the IEEE International Conference on Computer Vision, IEEE, pp. 2961-2969, 2017.
4 T. Lin, P. Goyal, R. Girshick, K. He, and P. Dollar, "Focal Loss for Dense Object Detection," Proceedings of the IEEE International Conference on Computer Vision, IEEE, pp. 2980-2988, 2017.
5 A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A.N. Gomez, L. Kaiser, and I. Polosukhin, "Attention Is All You Need," Advances in Neural Information Processing Systems, pp. 5998-6008, 2017.
6 M. Jagielski, A. Oprea, B. Biggio, C. Liu, C. Nita-Rotaru, and B. Li, "Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning," 2018 IEEE Symposium on Security and Privacy, IEEE, pp. 19-35, 2018.
7 Y. Yao, H. Li, H. Zheng, and B.Y. Zhao, "Latent Backdoor Attacks on Deep Neural Networks," Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, ACM, pp. 2041-2055, 2019.
8 C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, "Intriguing Properties of Neural Networks," arXiv preprint arXiv:1312.6199, 2013.
9 I. Goodfellow, J. Shlens, and C. Szegedy, "Explaining and Harnessing Adversarial Examples," arXiv preprint arXiv:1412.6572, 2014.
10 S.M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard, "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, IEEE, pp. 2574-2582, 2016.
11 N. Carlini and D. Wagner, "Towards Evaluating the Robustness of Neural Networks," 2017 IEEE Symposium on Security and Privacy, IEEE, pp. 39-57, 2017.
12 Z. Yang, J. Zhang, E.C. Chang, and Z. Liang, "Neural Network Inversion in Adversarial Setting via Background Knowledge Alignment," Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, ACM, pp. 225-240, 2019.
13 L. Song, R. Shokri, and P. Mittal, "Privacy Risks of Securing Machine Learning Models against Adversarial Examples," Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, ACM, pp. 241-257, 2019.
14 H. Kwon, H. Yoon, and D. Choi, "Restricted Evasion Attack: Generation of Restricted-Area Adversarial Example," IEEE Access, vol. 7, pp. 60908-60919, 2019.
15 W. Xu, D. Evans, and Y. Qi, "Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks," Symposium on Network and Distributed Systems Security, 2018.
16 M. Naseer, S. Khan, M. Hayat, F.S. Khan, and F. Porikli, "A Self-supervised Approach for Adversarial Robustness," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, IEEE, pp. 262-271, 2020.
17 J. Jia, A. Salem, M. Backes, Y. Zhang, and N.Z. Gong, "MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples," Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, ACM, pp. 259-274, 2019.
18 S. Shan, E. Wenger, J. Zhang, H. Li, H. Zheng, and B.Y. Zhao, "Fawkes: Protecting Privacy against Unauthorized Deep Learning Models," 29th USENIX Security Symposium, USENIX Association, pp. 1589-1604, 2020.