[1] C. Guo, J. R. Gardner, Y. You, A. G. Wilson, and K. Q. Weinberger, "Simple Black-box Adversarial Attacks," in Proc. ICML 2019, Jun. 9-15, 2019.
[2] K. Lee, K. Lee, H. Lee, and J. Shin, "A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks," in Proc. NeurIPS 2018, Dec. 3-8, 2018.
[3] Z. Huang and T. Zhang, "Black-Box Adversarial Attack with Transferable Model-based Embedding," in Proc. ICLR 2020, Apr. 26-May 1, 2020.
[4] H. Kwon, H. Yoon, and K.-W. Park, "Selective Poisoning Attack on Deep Neural Network to Induce Fine-Grained Recognition Error," in Proc. IEEE 2nd International Conference on Artificial Intelligence and Knowledge Engineering (AIKE), pp. 136-139, Jun. 3-5, 2019.
[5] A. Raghunathan, J. Steinhardt, and P. Liang, "Certified Defenses against Adversarial Examples," in Proc. ICLR 2018, Apr. 30-May 3, 2018.
[6] A. Shafahi, W. R. Huang, M. Najibi, O. Suciu, C. Studer, T. Dumitras, and T. Goldstein, "Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks," in Proc. NeurIPS 2018, Dec. 3-8, 2018.
[7] L. Zhu, Z. Liu, and S. Han, "Deep Leakage from Gradients," in Proc. NeurIPS 2019, Dec. 8-14, 2019.
[8] E. Bagdasaryan, A. Veit, Y. Hua, D. Estrin, and V. Shmatikov, "How To Backdoor Federated Learning," in Proc. 23rd International Conference on Artificial Intelligence and Statistics (AISTATS), Aug. 26-28, 2020.
[9] D. Yin, Y. Chen, K. Ramchandran, and P. Bartlett, "Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates," in Proc. ICML 2018, Jul. 10-15, 2018.