• Title/Summary/Keyword: Adaptive Differential Privacy


Adaptive Gaussian Mechanism Based on Expected Data Utility under Conditional Filtering Noise

  • Liu, Hai; Wu, Zhenqiang; Peng, Changgen; Tian, Feng; Lu, Laifeng
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.7 / pp.3497-3515 / 2018
  • Differential privacy has been broadly applied to statistical analysis, and its main objective is to balance the utility of noisy data against the privacy of individuals' sensitive information. However, because the added noise is random, an individual cannot be guaranteed an expected level of data utility under standard differential privacy mechanisms. To this end, we propose an adaptive Gaussian mechanism based on expected data utility under conditional filtering noise. First, this paper applies conditional filtering to the Gaussian mechanism's noise. Second, we define expected data utility in terms of the absolute value of the relative error. Finally, we present an adaptive Gaussian mechanism that combines expected data utility with conditional filtering noise. Comparative analysis shows that the adaptive Gaussian mechanism satisfies differential privacy and achieves the expected data utility for any given privacy budget. Furthermore, our scheme is easy to extend to an engineering implementation.
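The abstract's idea can be sketched roughly as follows. This is an illustrative guess at the mechanism's shape, not the paper's actual algorithm: Gaussian noise is drawn at the standard (ε, δ)-DP scale and resampled until the noisy answer meets a utility condition on the absolute relative error. Note that naively conditioning accepted noise on the true answer can weaken the formal guarantee; the paper's own analysis is what establishes that its filtering still satisfies differential privacy. The function name and parameters below are hypothetical.

```python
import numpy as np

def adaptive_gaussian(query_value, sensitivity, epsilon, delta,
                      utility_threshold, max_tries=1000, seed=0):
    """Illustrative sketch: resample Gaussian-mechanism noise until the
    noisy answer meets an expected-utility bound on the relative error."""
    rng = np.random.default_rng(seed)
    # Standard Gaussian mechanism scale for (epsilon, delta)-DP
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    noisy = query_value
    for _ in range(max_tries):
        noisy = query_value + rng.normal(0.0, sigma)
        # "Conditional filtering": accept only noise whose absolute
        # relative error stays within the expected-utility threshold
        if abs(noisy - query_value) / max(abs(query_value), 1e-12) <= utility_threshold:
            return noisy
    return noisy  # fall back to the last sample if no draw qualified
```

With a count query of 100, sensitivity 1, and a 5% relative-error target, the returned answer stays within ±5 of the true value while the noise scale itself is still set by (ε, δ).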

Study on Evaluation Method of Task-Specific Adaptive Differential Privacy Mechanism in Federated Learning Environment (연합 학습 환경에서의 Task-Specific Adaptive Differential Privacy 메커니즘 평가 방안 연구)

  • Assem Utaliyeva; Yoon-Ho Choi
    • Journal of the Korea Institute of Information Security & Cryptology / v.34 no.1 / pp.143-156 / 2024
  • Federated Learning (FL) has emerged as a potent methodology for decentralized model training across multiple collaborators, eliminating the need for data sharing. Although FL is lauded for its capacity to preserve data privacy, it is not impervious to various types of privacy attacks. Differential Privacy (DP), recognized as the gold standard among privacy-preservation techniques, is widely employed to counteract these vulnerabilities. This paper makes a specific contribution by applying an existing, task-specific adaptive DP mechanism to the FL environment. Our comprehensive analysis evaluates the impact of this mechanism on the performance of a shared global model, with particular attention to varying data distribution and partitioning schemes. This study deepens the understanding of the complex interplay between privacy and utility in FL, providing a validated methodology for securing data without compromising performance.
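As background for how DP is typically grafted onto FL, the common recipe is to clip each client's model update to a fixed norm and add Gaussian noise before it leaves the client. The sketch below shows that generic clip-and-noise step; it is a minimal illustration of the standard approach, not the task-specific adaptive mechanism the paper evaluates, and all names and parameter values are assumptions.

```python
import numpy as np

def dp_client_update(update, clip_norm, noise_multiplier, rng):
    """Clip a client's model update to clip_norm and add Gaussian noise,
    the standard DP-SGD-style step used when adding DP to FL (sketch)."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

# Toy round: five clients privatize their updates, the server averages them.
rng = np.random.default_rng(42)
updates = [rng.normal(0.0, 1.0, size=10) for _ in range(5)]
private = [dp_client_update(u, clip_norm=1.0, noise_multiplier=0.5, rng=rng)
           for u in updates]
global_delta = np.mean(private, axis=0)  # aggregated update for the global model
```

An "adaptive" variant would vary `clip_norm` or `noise_multiplier` per round or per layer rather than fixing them, which is the kind of mechanism whose utility impact the paper measures.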

Research on Federated Learning with Differential Privacy (차분 프라이버시를 적용한 연합학습 연구)

  • Jueun Lee; YoungSeo Kim; SuBin Lee; Ho Bae
    • Proceedings of the Korea Information Processing Society Conference / 2024.05a / pp.749-752 / 2024
  • Federated learning is a distributed machine learning method designed so that clients can train a model without handing their raw data to a central server. However, because model-update information is shared between clients and the central server, it remains exposed to inference attacks and poisoning attacks. To defend against these attacks, applying Differential Privacy to federated learning is an active area of study. Differential privacy is a technique that adds noise to data so that sensitive information is protected while meaningful statistical queries can still be answered; depending on where the noise is added, it is divided into Global Differential Privacy and Local Differential Privacy. Accordingly, this paper reviews recent research on federated learning with differential privacy, split into approaches applying global differential privacy and approaches applying local differential privacy. We further subdivide these to analyze the characteristics, strengths, and limitations of methods that apply refinements of differential privacy, namely Adaptive Differential Privacy and Personalized Differential Privacy, to federated learning, and we propose directions for future research.
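The global-versus-local split described above comes down to where the noise is injected. The toy sketch below contrasts the two for a simple count query: in the global (central) model, a trusted server sees the raw data and perturbs only the aggregate; in the local model, each client randomizes its own bit via randomized response before reporting. This is a generic textbook illustration, not any surveyed paper's method.

```python
import numpy as np

rng = np.random.default_rng(7)
data = rng.integers(0, 2, size=1000)  # one binary attribute per client
epsilon = 1.0

# Global DP: trusted aggregator adds a single Laplace draw to the true count.
true_count = int(data.sum())
global_dp_count = true_count + rng.laplace(0.0, 1.0 / epsilon)  # count sensitivity = 1

# Local DP: each client flips its bit via randomized response before sending.
p = np.exp(epsilon) / (np.exp(epsilon) + 1)   # probability of reporting truthfully
keep = rng.random(1000) < p
reports = np.where(keep, data, 1 - data)
# Debias the noisy reports into an unbiased estimate of the true count
local_dp_count = (reports.sum() - 1000 * (1 - p)) / (2 * p - 1)
```

At the same ε, the local estimate has much higher variance than the global one, since every client adds noise independently; this utility gap is exactly what motivates the adaptive and personalized refinements the survey covers.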