Acknowledgement
This research was supported by the 2023 Excellent Research Center program grant from Kookmin University.