Acknowledgement
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (Ministry of Science and ICT) (No. 2020R1F1A1061107), by the Korea Institute for Advancement of Technology (KIAT) grant funded by the Korea government (Ministry of Trade, Industry and Energy) (P0008703, The Competency Development Program for Industry Specialists, 2022), and by the ICT Innovative Human Resource Development 4.0 program of the Ministry of Science and ICT, supervised by the Institute of Information & Communications Technology Planning & Evaluation (IITP-2022-RS-2022-00156310).
References
- D. Lee, "Security threat verification and defense techniques in artificial intelligence for image processing" (Master's thesis), Chungnam National University, 2021.
- I. J. Goodfellow, J. Shlens, C. Szegedy, "Explaining and Harnessing Adversarial Examples", International Conference on Learning Representations (ICLR), 2015.
- K. Eykholt, I. Evtimov, E. Fernandes, "Robust Physical-World Attacks on Deep Learning Visual Classification", Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1625-1634, 2018.
- H. C. Kwon, S. J. Lee, J. Y. Choi, B. H. Chung, S. W. Lee, J. C. Nah, "Security Trends for Autonomous Driving Vehicle", Electronics and Telecommunications Trends, Vol. 33, No. 1, pp. 78-88, 2018.
- H. Chacon, S. Silva and P. Rad, "Deep Learning Poison Data Attack Detection", IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI), pp. 971-978, 2019. doi: 10.1109/ICTAI.2019.00137.
- N. Peri, N. Gupta, W. R. Huang, L. Fowl, C. Zhu, S. Feizi, T. Goldstein, J. Dickerson, "Deep k-NN Defense Against Clean-Label Data Poisoning Attacks", Computer Vision - ECCV 2020 Workshops: Glasgow, UK, August 23-28, 2020, Proceedings, Part I, Vol. 12535, 2020. https://doi.org/10.1007/978-3-030-66415-2_4.
- A. Shafahi, W. R. Huang, M. Najibi, O. Suciu, C. Studer, T. Dumitras, T. Goldstein, "Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks", Advances in Neural Information Processing Systems, Vol. 31, 2018.
- C. Zhu, W. R. Huang, H. Li, G. Taylor, C. Studer, T. Goldstein, "Transferable Clean-Label Poisoning Attacks on Deep Neural Nets", International Conference on Machine Learning, pp. 7614-7623, 2019.
- E. N. R. Ko and J. S. Moon, "Adversarial Example Detection and Classification Model Based on the Class Predicted by Deep Learning Model", Journal of the Korea Institute of Information Security & Cryptology, Vol. 31, No. 6, pp. 1227-1236, 2021.
- W. Xu, D. Evans, Y. Qi, "Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks," Symposium on Network and Distributed Systems Security, 2018.
- C. Shorten, T. M. Khoshgoftaar, "A survey on Image Data Augmentation for Deep Learning", Journal of Big Data, Vol. 6, No. 60, 2019. https://doi.org/10.1186/s40537-019-0197-0.
- Dongbin Na, 'Poison-Frogs-OneShotKillAttack-PyTorch', https://github.com/ndb796/Poison-Frogs-OneShotKillAttack-PyTorch, Retrieved: Aug. 18, 2022.
- Dongbin Na, 'simple_dog_and_cat_dataset', https://github.com/ndb796/Poison-Frogs-OneShotKillAttack-PyTorch/tree/main/simple_dog_and_cat_dataset, Retrieved: Aug. 18, 2022.