Research on Federated Learning with Differential Privacy

  • Jueun Lee (Dept. of Artificial Intelligence Convergence, Ewha Womans University);
  • YoungSeo Kim (Dept. of Artificial Intelligence Convergence, Ewha Womans University);
  • SuBin Lee (Dept. of Artificial Intelligence Convergence, Ewha Womans University);
  • Ho Bae (Dept. of Cyber Security, Ewha Womans University)
  • Published: 2024.05.23

Abstract

Federated learning is a distributed machine learning approach designed so that clients can train a model without handing their raw data to a central server. However, because model update information is still exchanged between the clients and the central server, it remains exposed to the risk of inference attacks and poisoning attacks. To defend against these attacks, applying differential privacy to federated learning has been actively studied. Differential privacy is a technique that adds noise to data so that sensitive information is protected while meaningful statistical query results can still be shared; depending on where the noise is added, it is divided into global differential privacy and local differential privacy. Accordingly, this paper reviews recent research on federated learning with differential privacy, separated into approaches based on global differential privacy and approaches based on local differential privacy. We further subdivide this review to cover methods that apply refinements of differential privacy, namely adaptive differential privacy and personalized differential privacy, to federated learning, analyzing their characteristics, strengths, and limitations, and suggesting directions for future research.
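As a concrete illustration of where the noise is added in the two settings described above, the following minimal NumPy sketch (not taken from any of the surveyed papers) contrasts local differential privacy, where each client clips and perturbs its own model update before upload, with global differential privacy, where a trusted server perturbs the aggregate once. The function names and the clipping and noise constants are hypothetical illustration choices; in practice the noise scale is calibrated from the target (epsilon, delta) privacy budget.

import numpy as np

rng = np.random.default_rng(0)

CLIP_NORM = 1.0   # per-update L2 clipping bound (sensitivity); hypothetical value
NOISE_STD = 0.5   # Gaussian noise scale; in practice derived from (epsilon, delta)

def clip_update(update, clip_norm=CLIP_NORM):
    """Scale a client update so its L2 norm is at most clip_norm."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))

def local_dp_round(client_updates):
    """Local DP: each client clips and perturbs its own update before upload,
    so the server never observes an unprotected update."""
    noisy = [clip_update(u) + rng.normal(0.0, NOISE_STD, size=u.shape)
             for u in client_updates]
    return np.mean(noisy, axis=0)

def global_dp_round(client_updates):
    """Global (central) DP: a trusted server clips each update, averages them,
    and adds noise once to the aggregate. The noise std is scaled by 1/n since
    the mean's sensitivity to any single clipped update is CLIP_NORM / n."""
    clipped = [clip_update(u) for u in client_updates]
    aggregate = np.mean(clipped, axis=0)
    n = len(client_updates)
    return aggregate + rng.normal(0.0, NOISE_STD / n, size=aggregate.shape)

# Toy usage: three clients send 4-dimensional model updates.
updates = [rng.normal(size=4) for _ in range(3)]
print("local DP aggregate :", local_dp_round(updates))
print("global DP aggregate:", global_dp_round(updates))

The clipping step bounds each client's influence in both variants; the difference is only who is trusted to see unprotected updates, which is the axis along which the surveyed methods are organized in this paper.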

Keywords

Acknowledgement

This research was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) in 2024 (No. RS-2022-00155966, Artificial Intelligence Convergence Innovation Human Resources Development (Ewha Womans University)).
