AI Security Issues Regarding Evasion Attacks

  • Published : 2018.02.26

References

  1. Schmidhuber, Jürgen. "Deep learning in neural networks: An overview." Neural Networks 61: 85-117, 2015. https://doi.org/10.1016/j.neunet.2014.09.003
  2. Simonyan, Karen, and Andrew Zisserman. "Very deep convolutional networks for large-scale image recognition." International Conference on Learning Representations, 2015.
  3. Hinton, Geoffrey, et al. "Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups." IEEE Signal Processing Magazine 29(6): 82-97, 2012. https://doi.org/10.1109/MSP.2012.2205597
  4. Potluri, Sasanka, and Christian Diedrich. "Accelerated deep neural networks for enhanced intrusion detection system." 2016 IEEE 21st International Conference on Emerging Technologies and Factory Automation, 2016.
  5. Bishop, Christopher M. Neural Networks for Pattern Recognition. Oxford University Press, 1995.
  6. Szegedy, Christian, et al. "Intriguing properties of neural networks." International Conference on Learning Representations, 2014.
  7. Papernot, Nicolas, Patrick McDaniel, and Ian Goodfellow. "Transferability in machine learning: from phenomena to black-box attacks using adversarial samples." arXiv preprint arXiv:1605.07277, 2016.
  8. Papernot, Nicolas, et al. "Distillation as a defense to adversarial perturbations against deep neural networks." 2016 IEEE Symposium on Security and Privacy (SP), 2016.
  9. Papernot, Nicolas, et al. "The limitations of deep learning in adversarial settings." 2016 IEEE European Symposium on Security and Privacy, 2016.
  10. Moosavi-Dezfooli, Seyed-Mohsen, Alhussein Fawzi, and Pascal Frossard. "DeepFool: A simple and accurate method to fool deep neural networks." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.
  11. Carlini, Nicholas, and David Wagner. "Towards evaluating the robustness of neural networks." 2017 IEEE Symposium on Security and Privacy (SP), 2017.
  12. Papernot, Nicolas, et al. "Practical black-box attacks against machine learning." Proceedings of the 2017 ACM Asia Conference on Computer and Communications Security, 2017.
  13. Goodfellow, Ian, Jonathon Shlens, and Christian Szegedy. "Explaining and harnessing adversarial examples." International Conference on Learning Representations, 2015.
  14. Liu, Yanpei, Xinyun Chen, Chang Liu, and Dawn Song. "Delving into transferable adversarial examples and black-box attacks." International Conference on Learning Representations, 2017.
  15. Kurakin, Alexey, Ian Goodfellow, and Samy Bengio. "Adversarial examples in the physical world." ICLR Workshop, 2017.