[1] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, "Intriguing properties of neural networks," Int'l Conf. Learning Representations (ICLR), 2014.
[2] S. Huang, N. Papernot, I. Goodfellow, Y. Duan, and P. Abbeel, "Adversarial attacks on neural network policies," Int'l Conf. Learning Representations (ICLR), 2017.
[3] B. Liang, H. Li, M. Su, P. Bian, X. Li, and W. Shi, "Deep text classification can be fooled," Int'l Joint Conf. Artificial Intelligence (IJCAI), 2018.
[4] N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami, "Practical black-box attacks against machine learning," ACM Asia Conf. Comput. Comm. Security, 2017.
[5] G. Philipp and J. G. Carbonell, "The nonlinearity coefficient - predicting generalization in deep neural networks," arXiv:1806.00179, 2018.
[6] I. Goodfellow, J. Shlens, and C. Szegedy, "Explaining and harnessing adversarial examples," Int'l Conf. Learning Representations (ICLR), 2015.
[7] V. Zantedeschi, M.-I. Nicolae, and A. Rawat, "Efficient defenses against adversarial attacks," ACM Workshop on Artificial Intelligence and Security (AISec), 2017.
[8] C.-J. Simon-Gabriel, Y. Ollivier, L. Bottou, B. Schölkopf, and D. Lopez-Paz, "First-order adversarial vulnerability of neural networks and input dimension," Int'l Conf. Machine Learning (ICML), 2019.
[9] S. M. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard, "Universal adversarial perturbations," Computer Vision and Pattern Recognition (CVPR), 2017.
[10] A. Ilyas, S. Santurkar, D. Tsipras, L. Engstrom, B. Tran, and A. Madry, "Adversarial examples are not bugs, they are features," Neural Information Processing Systems (NeurIPS), 2019.
[11] B. Biggio and F. Roli, "Wild patterns: ten years after the rise of adversarial machine learning," Pattern Recog., vol. 84, pp. 317-331, 2018.
[12] N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami, "The limitations of deep learning in adversarial settings," IEEE European Symposium on Security and Privacy (EuroS&P), 2016.
[13] A. Kurakin, I. Goodfellow, and S. Bengio, "Adversarial examples in the physical world," Int'l Conf. Learning Representations (ICLR), 2017.
[14] N. Carlini and D. Wagner, "Defensive distillation is not robust to adversarial examples," arXiv:1607.04311, 2016.
[15] J. Zhang and C. Li, "Adversarial examples: opportunities and challenges," IEEE Trans. Neural Networks and Learning Systems, vol. 31, no. 7, pp. 2578-2593, 2020.
[16] S. M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard, "DeepFool: a simple and accurate method to fool deep neural networks," Computer Vision and Pattern Recognition (CVPR), 2016.
[17] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," Neural Information Processing Systems (NIPS), 2012.
[18] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," Computer Vision and Pattern Recognition (CVPR), 2016.
[19] H. Kim, D. C. Jung, and B. W. Choi, "Vulnerability of deep learning-based medical imaging artificial intelligence models: adversarial attacks," Journal of the Korean Society of Radiology, vol. 80, no. 2, pp. 259-273, 2019.
[20] K. Eykholt, I. Evtimov, E. Fernandes, B. Li, A. Rahmati, C. Xiao, A. Prakash, T. Kohno, and D. Song, "Robust physical-world attacks on deep learning visual classification," Computer Vision and Pattern Recognition (CVPR), 2018.
[21] H. Hosseini and R. Poovendran, "Semantic adversarial examples," Computer Vision and Pattern Recognition Workshop (CVPRW), 2018.
[22] A. S. Shamsabadi, C. Oh, and A. Cavallaro, "EdgeFool: an adversarial image enhancement filter," Int'l Conf. Acoustics, Speech and Signal Process. (ICASSP), 2020.
[23] A. S. Shamsabadi, R. Sanchez-Matilla, and A. Cavallaro, "ColorFool: semantic adversarial colorization," Computer Vision and Pattern Recognition (CVPR), 2020.
[24] D. Peng, Z. Zheng, L. Luo, and X. Zhang, "Structure matters: towards generating transferable adversarial images," European Conf. Artificial Intelligence (ECAI), 2020.
[25] A. Ghiasi, A. Shafahi, and T. Goldstein, "Breaking certified defenses: semantic adversarial examples with spoofed robustness certificates," Int'l Conf. Learning Representations (ICLR), 2020.
[26] R. Duan, X. Ma, Y. Wang, J. Bailey, A. K. Qin, and Y. Yang, "Adversarial camouflage: hiding physical-world attacks with natural styles," Computer Vision and Pattern Recognition (CVPR), 2020.
[27] C. Xiao, B. Li, J.-Y. Zhu, W. He, M. Liu, and D. Song, "Generating adversarial examples with adversarial networks," Int'l Joint Conf. Artificial Intelligence (IJCAI), 2018.
[28] T. Brown, D. Mane, A. Roy, M. Abadi, and J. Gilmer, "Adversarial patch," Neural Information Processing Systems (NIPS) Workshop, 2017.