Federated Learning Vulnerabilities: Privacy Attacks with Denoising Diffusion Probabilistic Models

Hongyan Gu, Xinyi Zhang, Jiang Li, Hui Wei, Baiqi Li, Xinli Huang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

8 Scopus citations

Abstract

Federated Learning (FL) is widely valued for protecting data privacy in distributed environments. However, the correlation between updated gradients and the training data opens up the possibility of data reconstruction by malicious attackers, threatening the basic privacy guarantees of FL. Previous research on such attacks falls into two main camps: one relies exclusively on gradient matching, which performs well on small-scale data but falters on large-scale data; the other incorporates image priors but faces practical implementation challenges. To date, the effectiveness of privacy leakage attacks in FL remains far from satisfactory. In this paper, we introduce the Gradient Guided Diffusion Model (GGDM), a novel learning-free approach based on a pre-trained unconditional Denoising Diffusion Probabilistic Model (DDPM), aimed at improving the effectiveness and reducing the implementation difficulty of gradient-based privacy attacks on complex networks and high-resolution images. To the best of our knowledge, this is the first work to employ DDPMs for privacy leakage attacks on FL. GGDM capitalizes on the unique nature of gradients and guides the DDPM so that reconstructed images closely mirror the original data. In addition, based on theoretical analysis, GGDM combines the gradient similarity function with a Stochastic Differential Equation (SDE) to guide the DDPM sampling process, and further reveals the impact of common similarity functions on data reconstruction. Extensive evaluation results demonstrate the strong generalization ability of GGDM. Specifically, compared with state-of-the-art methods, GGDM shows clear superiority in both quantitative metrics and visual quality, significantly enhancing the reconstruction quality of privacy attacks.
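As a rough illustration of the guided-sampling idea the abstract describes, the sketch below pairs a stand-in reverse-diffusion loop with a gradient-matching guidance step. Everything here is hypothetical and not from the paper: the victim model's gradient is assumed linear in the input (g(x) = A x) so the matching objective is a convex quadratic, and no real DDPM denoiser is involved — an actual attack would backpropagate through the federated model and steer a pre-trained diffusion sampler.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy: assume the victim model's per-example gradient is linear
# in the input, g(x) = A @ x, so the gradient-matching objective is a convex
# quadratic. Real attacks backpropagate through a deep network instead.
d = 16
A = rng.normal(size=(d, d))
x_true = rng.normal(size=d)          # the private training example
g_obs = A @ x_true                   # gradient "leaked" during FL training

def match_loss(x):
    """Squared-error gradient-matching term (cosine similarity is another
    common choice of similarity function)."""
    r = A @ x - g_obs
    return float(r @ r)

# Skeleton of a guided reverse process: each iteration takes a guidance step
# that descends the gradient-matching loss, then injects annealed noise in
# place of a learned denoiser -- the same steering idea, minus the DDPM.
lr = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)    # safe step size (1/L)
x = rng.normal(size=d)                           # start from pure noise
losses = [match_loss(x)]
T = 300
for t in range(T, 0, -1):
    x = x - lr * 2.0 * A.T @ (A @ x - g_obs)     # guidance: -lr * d(loss)/dx
    x = x + 0.001 * (t / T) * rng.normal(size=d) # annealed diffusion noise
    losses.append(match_loss(x))

print(f"gradient-matching loss: {losses[0]:.1f} -> {losses[-1]:.4f}")
```

In GGDM proper, the guidance term is combined with the SDE-based DDPM reverse step and the loss is computed through the actual federated model; this sketch only shows why descending a gradient-similarity loss at each sampling step pulls the sample toward the private input.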

Original language: English
Title of host publication: WWW 2024 - Proceedings of the ACM Web Conference
Publisher: Association for Computing Machinery, Inc
Pages: 1149-1157
Number of pages: 9
ISBN (Electronic): 9798400701719
DOIs
State: Published - 13 May 2024
Event: 33rd ACM Web Conference, WWW 2024 - Singapore, Singapore
Duration: 13 May 2024 - 17 May 2024

Publication series

Name: WWW 2024 - Proceedings of the ACM Web Conference

Conference

Conference: 33rd ACM Web Conference, WWW 2024
Country/Territory: Singapore
City: Singapore
Period: 13/05/24 - 17/05/24

Keywords

  • denoising diffusion probabilistic models
  • federated learning
  • privacy attack
