
Research Trends in Attacks on Machine Learning Models: Focusing on Deep Neural Networks

Lee, Seulgi (Security Threat Response R&D Team, Korea Internet & Security Agency)
Kim, KyeongHan (Security Threat Response R&D Team, Korea Internet & Security Agency)
Kim, Byungik (Security Threat Response R&D Team, Korea Internet & Security Agency)
Park, SoonTai (Security Threat Response R&D Team, Korea Internet & Security Agency)