[1] "Poison attacks against machine learning: Security and spam-detection programs could be affected," Kurzweil Accelerating Intelligence, July 2012.

[2] Mozaffari-Kermani, Mehran, et al. "Systematic poisoning attacks on and defenses for machine learning in healthcare." IEEE Journal of Biomedical and Health Informatics, vol. 19, no. 6, pp. 1893-1905, 2015.

[3] Szegedy, Christian, et al. "Intriguing properties of neural networks." arXiv preprint arXiv:1312.6199, 2013.

[4] T. Vaidya, Y. Zhang, M. Sherr, and C. Shields, "Cocaine noodles: exploiting the gap between human and machine speech recognition," in 9th USENIX Workshop on Offensive Technologies (WOOT 15), 2015.

[5] Tramèr, Florian, et al. "Stealing machine learning models via prediction APIs." 25th USENIX Security Symposium, 2016.

[6] Fredrikson, Matt, Somesh Jha, and Thomas Ristenpart. "Model inversion attacks that exploit confidence information and basic countermeasures." Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, ACM, 2015.

[7] https://en.wikipedia.org/wiki/Sanitization_(classified_information)