References
- Ian J. Goodfellow et al., "Explaining and Harnessing Adversarial Examples," ICLR, San Diego, 2015, 11
- Yuheng Zhang et al., "The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks," CVPR, Seattle, 2020, 9
- Florian Tramèr et al., "Slalom: Fast, Verifiable and Private Execution of Neural Networks in Trusted Hardware," ICLR, New Orleans, 2019, 19
- Stavros Volos et al., "Graviton: Trusted Execution Environments on GPUs," OSDI, California, 2018, 17
- Brian Knott et al., "CrypTen: Secure Multi-Party Computation Meets Machine Learning," NeurIPS, Vancouver, 2021, 23
- Moritz Lipp et al., "Meltdown: Reading Kernel Memory from User Space," USENIX Security, Baltimore, 2018, 19