[1] L. Huang, A. D. Joseph, and B. Nelson, "Adversarial machine learning," in Proceedings of the 4th ACM Workshop on Security and Artificial Intelligence, pp. 43-58, October 2011.
[2] A. Krizhevsky, I. Sutskever, and G. Hinton, "ImageNet classification with deep convolutional neural networks," Advances in Neural Information Processing Systems, vol. 25, pp. 1097-1105, December 2012.
[3] H. Sak, A. Senior, and F. Beaufays, "Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition," arXiv preprint arXiv:1402.1128, 2014.
[4] H. Sak, A. Senior, and F. Beaufays, "Long short-term memory recurrent neural network architectures for large scale acoustic modeling," in Fifteenth Annual Conference of the International Speech Communication Association, pp. 338-342, 2014.
[5] N. Papernot, P. McDaniel, and I. Goodfellow, "Transferability in machine learning: from phenomena to black-box attacks using adversarial samples," arXiv preprint arXiv:1605.07277, 2016.
[6] M. Abadi, P. Barham, and J. Chen, "TensorFlow: A system for large-scale machine learning," in 12th USENIX Symposium on Operating Systems Design and Implementation, pp. 265-283, November 2016.
[7] L. Bottou, "Large-scale machine learning with stochastic gradient descent," in Proceedings of COMPSTAT'2010, Physica-Verlag HD, pp. 177-186, August 2010.
[8] D. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
[9] X. Gao, Y. Tan, and H. Jiang, "Boosting targeted black-box attacks via ensemble substitute training and linear augmentation," Applied Sciences, vol. 9, no. 11, pp. 2286-2300, June 2019.
[10] R. Caruana, A. Niculescu-Mizil, and G. Crew, "Ensemble selection from libraries of models," in Proceedings of the Twenty-First International Conference on Machine Learning, ACM, p. 18, July 2004.
[11] Y. Liu, X. Chen, and C. Liu, "Delving into transferable adversarial examples and black-box attacks," arXiv preprint arXiv:1611.02770, 2016.
[12] I. Goodfellow, J. Shlens, and C. Szegedy, "Explaining and harnessing adversarial examples," arXiv preprint arXiv:1412.6572, 2014.
[13] B. Biggio, I. Corona, and D. Maiorca, "Evasion attacks against machine learning at test time," in Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 387-402, September 2013.
[14] N. Papernot, P. McDaniel, and I. Goodfellow, "Practical black-box attacks against machine learning," in Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, pp. 506-519, April 2017.
[15] S. Qiu, Q. Liu, and S. Zhou, "Review of artificial intelligence adversarial attack and defense technologies," Applied Sciences, vol. 9, no. 5, pp. 909-938, March 2019.
[16] N. Papernot, P. McDaniel, and S. Jha, "The limitations of deep learning in adversarial settings," in 2016 IEEE European Symposium on Security and Privacy, pp. 372-387, March 2016.
[17] N. Carlini and D. Wagner, "Towards evaluating the robustness of neural networks," in 2017 IEEE Symposium on Security and Privacy, pp. 39-57, May 2017.
[18] X. Yuan, P. He, and Q. Zhu, "Adversarial examples: attacks and defenses for deep learning," IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 9, pp. 2805-2824, 2019.
[19] D. Bertsekas, "Nonlinear programming," Journal of the Operational Research Society, vol. 48, no. 3, p. 334, 1997.
[20] C. Szegedy, W. Zaremba, and I. Sutskever, "Intriguing properties of neural networks," arXiv preprint arXiv:1312.6199, 2013.
[21] Y. LeCun, P. Haffner, and L. Bottou, "Object recognition with gradient-based learning," in Shape, Contour and Grouping in Computer Vision, Springer, Berlin, Heidelberg, pp. 319-345, 1999.