http://dx.doi.org/10.13089/JKIISC.2019.29.3.589

Differential Privacy Technology Resistant to the Model Inversion Attack in AI Environments  

Park, Cheollhee (Kongju National University)
Hong, Dowon (Kongju National University)
Abstract
The amount of digital data is growing explosively, and these data hold great potential value. Countries and companies are creating various added values from vast amounts of data and are investing heavily in data analysis techniques. However, the privacy problems that arise during data analysis are a major factor hindering data utilization. Recently, as privacy violation attacks against neural network models have been proposed, research on artificial neural network technology that preserves privacy has become necessary. Accordingly, various privacy-preserving artificial neural network technologies have been studied in the field of differential privacy, which guarantees strict privacy. However, existing approaches suffer from a poor balance between the accuracy of the neural network model and the privacy budget. In this paper, we study a differential privacy technique that preserves the performance of a model within a given privacy budget and is resistant to model inversion attacks. We also analyze the resistance to model inversion attacks according to the strength of privacy preservation.
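The noise-calibration idea behind differentially private neural network training described in the abstract (and in refs. 8 and 10) can be sketched as per-example gradient clipping followed by Gaussian noise. The function name, clipping bound, and noise multiplier below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One DP-SGD-style aggregation step (after Abadi et al., ref. 10):
    clip each per-example gradient to clip_norm, sum the clipped
    gradients, add Gaussian noise scaled to the clipping bound, and
    average over the batch. noise_multiplier controls the privacy/
    accuracy trade-off governed by the privacy budget."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds clip_norm.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    total = np.sum(clipped, axis=0)
    # Gaussian noise calibrated to the sensitivity bound clip_norm.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)
```

Larger values of `noise_multiplier` spend less of the privacy budget per step but degrade model accuracy, which is the balance the paper studies.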
Keywords
Differential privacy; model inversion attack; privacy-preserving neural network;
Citations & Related Records
  • Reference
1 K. He, X. Zhang, S. Ren, and J. Sun, "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification," In Proceedings of the IEEE International Conference on Computer Vision, pp. 1026-1034, Dec. 2015.
2 O. Vinyals, L. Kaiser, T. Koo, S. Petrov, I. Sutskever, and G. Hinton, "Grammar as a foreign language," In Advances in Neural Information Processing Systems, pp. 2773-2781, Dec. 2015.
3 C. J. Maddison, A. Huang, I. Sutskever, and D. Silver, "Move evaluation in Go using deep convolutional neural networks," arXiv preprint arXiv:1412.6564, 2014.
4 M. Fredrikson, E. Lantz, S. Jha, S. Lin, D. Page, and T. Ristenpart, "Privacy in pharmacogenetics: An end-to-end case study of personalized warfarin dosing," In 23rd USENIX Security Symposium, pp. 17-32, Aug. 2014.
5 M. Fredrikson, S. Jha, and T. Ristenpart, "Model inversion attacks that exploit confidence information and basic countermeasures," In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, pp. 1322-1333, Oct. 2015.
6 R. Shokri, M. Stronati, C. Song, and V. Shmatikov, "Membership inference attacks against machine learning models," In 2017 IEEE Symposium on Security and Privacy, pp. 3-18, May 2017.
7 F. Tramer, F. Zhang, A. Juels, M. K. Reiter, and T. Ristenpart, "Stealing machine learning models via prediction APIs," In 25th USENIX Security Symposium, pp. 601-618, Jun. 2016.
8 C. Dwork, F. McSherry, K. Nissim, and A. Smith, "Calibrating noise to sensitivity in private data analysis," In Theory of Cryptography Conference, pp. 265-284, Mar. 2006.
9 C. Dwork and A. Roth, "The algorithmic foundations of differential privacy," Foundations and Trends in Theoretical Computer Science, vol. 9, no. 3-4, pp. 211-407, 2014.
10 M. Abadi, A. Chu, I. Goodfellow, H. B. McMahan, I. Mironov, K. Talwar, and L. Zhang, "Deep learning with differential privacy," In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp. 308-318, Oct. 2016.
11 C. Dwork, K. Kenthapadi, F. McSherry, I. Mironov, and M. Naor, "Our data, ourselves: Privacy via distributed noise generation," In Annual International Conference on the Theory and Applications of Cryptographic Techniques, pp. 486-503, May 2006.
12 C. Dwork and J. Lei, "Differential privacy and robust statistics," In STOC, vol. 9, pp. 371-380, May 2009.
13 C. Dwork, G. N. Rothblum, and S. Vadhan, "Boosting and differential privacy," In 2010 IEEE 51st Annual Symposium on Foundations of Computer Science, pp. 51-60, Oct. 2010.
14 P. Kairouz, S. Oh, and P. Viswanath, "The composition theorem for differential privacy," IEEE Transactions on Information Theory, vol. 63, no. 6, pp. 4037-4049, 2017.
15 A. Beimel, S. P. Kasiviswanathan, and K. Nissim, "Bounds on the sample complexity for private learning and private data release," In Theory of Cryptography Conference, pp. 437-454, Feb. 2010.
16 AT&T Laboratories Cambridge, "The ORL database of faces," http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html, accessed Jan. 2019.