Abstract
We introduce a novel method for defending against model inversion attacks in Federated Learning (FL). FL enables training a global model by sharing local gradients rather than clients' private data. However, model inversion attacks can reconstruct that data from the shared gradients. Traditional defense mechanisms, such as Differential Privacy (DP) and Homomorphic Encryption (HE), struggle to balance privacy against model accuracy. Our approach selectively encrypts the most important gradients, those carrying the most information about the training data, striking a balance between privacy and computational efficiency. Additionally, optional DP noise is applied to the unencrypted gradients for enhanced security. Comprehensive evaluations demonstrate that our method significantly improves both privacy and model accuracy compared to existing defenses.
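For illustration only, the following is a minimal sketch of the selective-protection idea summarized above. It assumes gradient importance is proxied by absolute magnitude and that `encrypt_fn` wraps a homomorphic encryption routine (e.g., Paillier); the function names, the `top_frac` parameter, and the magnitude heuristic are our assumptions, not details taken from the paper.

```python
import numpy as np

def protect_gradients(grads, encrypt_fn, top_frac=0.1, noise_std=0.01):
    """Split a gradient vector into an 'important' subset that is encrypted
    and a remainder that receives optional DP-style Gaussian noise.

    Assumption: importance is measured by absolute gradient magnitude;
    the paper's actual importance criterion may differ.
    """
    flat = grads.ravel()
    k = max(1, int(top_frac * flat.size))
    # Indices of the k largest-magnitude gradients (assumed importance proxy).
    important = np.argpartition(np.abs(flat), -k)[-k:]
    mask = np.zeros(flat.size, dtype=bool)
    mask[important] = True

    # HE-protect the important gradients; encrypt_fn is a placeholder.
    encrypted = encrypt_fn(flat[mask])
    # Apply optional DP noise to the remaining, unencrypted gradients.
    noisy_rest = flat[~mask] + np.random.normal(0.0, noise_std, size=flat.size - k)
    return encrypted, noisy_rest, mask
```

In this sketch, only the top fraction of gradients incurs encryption cost, while the rest are cheaply perturbed, which reflects the privacy-versus-efficiency trade-off the abstract describes.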