http://dx.doi.org/10.14400/JDC.2019.17.7.285

Analysis of privacy issues and countermeasures in neural network learning  

Hong, Eun-Ju (Dept. of Convergence Science, Kongju National University)
Lee, Su-Jin (Dept. of Mathematics, Kongju National University)
Hong, Do-won (Dept. of Applied Mathematics, Kongju National University)
Seo, Chang-Ho (Dept. of Applied Mathematics, Kongju National University)
Publication Information
Journal of Digital Convergence, v.17, no.7, 2019, pp. 285-292
Abstract
With the popularization of PCs, SNS, and IoT devices, enormous amounts of data are generated, and the volume is increasing exponentially. Artificial neural network learning, which draws on this huge amount of data, has attracted attention in many fields in recent years. It has shown tremendous potential in speech recognition and image recognition, and is widely applied to a variety of complex areas such as medical diagnosis, artificial intelligence games, and face recognition. The results of artificial neural networks are accurate enough to surpass human performance. Despite these many advantages, privacy problems remain in artificial neural network learning. The training data used for artificial neural network learning contains various kinds of information, including sensitive personal information, so privacy can be exposed by malicious attackers. Privacy risks arise when an attacker interferes with training to degrade learning, or when an attacker targets a model whose training is complete. In this paper, we analyze recently proposed attacks against neural network models and the corresponding privacy protection methods.
Keywords
Artificial Neural Network; Privacy; Differential Privacy; Homomorphic Encryption; Attack
Citations & Related Records
연도 인용수 순위
References
1 M. Abadi et al. (2016). Deep Learning with Differential Privacy. Proc. ACM CCS, (pp. 308-318). ACM : Vienna.
2 G. Acs, L. Melis, C. Castelluccia & E. De Cristofaro. (2017). Differentially private mixture of generative neural networks. IEEE Transactions on Knowledge and Data Engineering, 31(6), 1109-1121.
3 C. Dwork & G. N. Rothblum. (2016). Concentrated differential privacy. CoRR, abs/1603.01887.
4 L. Yu, L. Liu, C. Pu, M. E. Gursoy & S. Truex. (2019). Differentially Private Model Publishing for Deep Learning. IEEE Symposium on Security and Privacy (SP).
5 X. Zhang, S. Ji, H. Wang & T. Wang. (2017). Private, Yet Practical, Multiparty Deep Learning. ICDCS, (pp. 1442-1452). IEEE.
6 K. Bonawitz et al. (2017). Practical Secure Aggregation for Privacy-Preserving Machine Learning. Proc. ACM CCS, (pp. 1175-1191). ACM.
7 M. Ribeiro, K. Grolinger & M. A. M. Capretz. (2015). MLaaS: Machine Learning as a Service. In IEEE International Conference on Machine Learning and Applications (ICMLA), (pp. 896-902).
8 M. Fredrikson, S. Jha & T. Ristenpart. (2015). Model inversion attacks that exploit confidence information and basic countermeasures. In CCS, (pp. 1322-1333). USA : ACM.
9 A. Krizhevsky, I. Sutskever & G. E. Hinton. (2012). Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, (pp. 1097-1105).
10 S. Hochreiter & J. Schmidhuber. (1997). Long short-term memory. Neural computation 9(8), 1735-1780.   DOI
11 H. Bae, J. Jang, D. Jung, H. Jang, H. Ha & S. Yoon. (2018). Security and Privacy Issues in Deep Learning. arXiv preprint arXiv:1807.11655.
12 B. Hitaj, G. Ateniese & F. Perez-Cruz. (2017). Deep Models under the GAN: Information Leakage from Collaborative Deep Learning. Proc. ACM CCS, (pp. 603-618). ACM.
13 C. Dwork & A. Roth. (2013). The algorithmic foundations of differential privacy. Theoretical Computer Science, 9(3-4), 211-407.
14 C. Gentry. (2009). A fully homomorphic encryption scheme. PhD thesis, Stanford University, California.
15 P. Martins, L. Sousa & A. Mariano. (2018). A survey on fully homomorphic encryption: An engineering perspective. ACM Computing Surveys (CSUR), 50(6), Article 83.
16 Y. Lindell & B. Pinkas. (2008). Secure multiparty computation for privacy-preserving data mining. IACR Cryptology ePrint Archive 197.
17 P. Mohassel & Y. Zhang. (2017). SecureML: A System for Scalable Privacy preserving Machine Learning. IEEE Sym. SP, p. 19-38.
18 S. Chang & C. Li. (2018). Privacy in Neural Network Learning: Threats and Countermeasures. IEEE Network, 32(4), 61-67.   DOI
19 R. Shokri, M. Stronati, C. Song & V. Shmatikov. (2017). Membership Inference Attacks against Machine Learning Models. IEEE Symposium on Security and Privacy (SP), (pp. 3-18).
20 F. Tramer, F. Zhang, A. Juels, M. K. Reiter & T. Ristenpart. (2016). Stealing Machine Learning Models via Prediction APIs. USENIX Security Symposium, (pp. 601-618). USENIX : Austin.
21 L. Xie, K. Lin, S. Wang, F. Wang & J. Zhou. (2018). Differentially Private Generative Adversarial Network. arXiv preprint arXiv:1802.06739.
22 K. Ligett. (2017). Introduction to differential privacy, randomized response, basic properties. The 7th BIU Winter School on Cryptography, BIU.
23 J. Yuan & S. Yu. (2014). Privacy Preserving Back-Propagation Neural Network Learning Made Practical with Cloud Computing. IEEE Trans. PDS, p. 212-221.
24 P. Li et al. (2017). Multi-Key Privacy-Preserving Deep Learning in Cloud Computing. Future Generation Computer Systems, 74, 76-85.