Acknowledgement
This work was supported by an Electronics and Telecommunications Research Institute (ETRI) grant funded by the ICT R&D program of MSIT/IITP [2021-0-00193, Development of photorealistic digital human creation and 30fps realistic rendering technology].
References
- C. Guo, J. R. Gardner, Y. You, A. G. Wilson, and K. Q. Weinberger, "Simple Black-box Adversarial Attacks," in Proc. ICML 2019, Jun. 9-15, 2019.
- K. Lee, K. Lee, H. Lee, and J. Shin, "A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks," in Proc. NeurIPS 2018, Dec. 3-8, 2018.
- Z. Huang and T. Zhang, "Black-Box Adversarial Attack with Transferable Model-based Embedding," in Proc. ICLR 2020, Apr. 26-May 1, 2020.
- H. Kwon, H. Yoon, and K.-W. Park, "Selective Poisoning Attack on Deep Neural Network to Induce Fine-Grained Recognition Error," in Proc. IEEE 2nd International Conference on Artificial Intelligence and Knowledge Engineering (AIKE), pp. 136-139, Jun. 3-5, 2019.
- A. Raghunathan, J. Steinhardt, and P. Liang, "Certified Defenses against Adversarial Examples," in Proc. ICLR 2018, Apr. 30-May 3, 2018.
- A. Shafahi, W. R. Huang, M. Najibi, O. Suciu, C. Studer, T. Dumitras, and T. Goldstein, "Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks," in Proc. NeurIPS 2018, Dec. 3-8, 2018.
- E. Bagdasaryan, A. Veit, Y. Hua, D. Estrin, and V. Shmatikov, "How To Backdoor Federated Learning," in Proc. 23rd International Conference on Artificial Intelligence and Statistics (AISTATS), Aug. 26-28, 2020.
- D. Yin, Y. Chen, K. Ramchandran, and P. Bartlett, "Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates," in Proc. ICML 2018, Jul. 10-15, 2018.
- L. Zhu, Z. Liu, and S. Han, "Deep Leakage from Gradients," in Proc. NeurIPS 2019, Dec. 8-14, 2019.