DP-GSGLD: A Bayesian optimizer inspired by differential privacy defending against privacy leakage in federated learning

Chengyi Yang, Kun Jia, Deli Kong, Jiayin Qi, Aimin Zhou

Research output: Contribution to journal › Article › peer-review


Abstract

Stochastic Gradient Langevin Dynamics (SGLD) is believed to preserve differential privacy as an intrinsic attribute, since it obtains randomness from posterior sampling and natural noise. In this paper, we propose Differentially Private General Stochastic Gradient Langevin Dynamics (DP-GSGLD), a novel variant of SGLD that realizes gradient estimation in parameter updating through Bayesian sampling. We introduce the technique of parameter clipping and prove that DP-GSGLD satisfies the property of Differential Privacy (DP). We conduct experiments on several image datasets to defend against the gradient attacks that commonly appear in federated learning scenarios. The results demonstrate that DP-GSGLD can decrease model training time and achieve higher accuracy at the same privacy level.
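To make the abstract's idea concrete, the following is a minimal sketch of a single SGLD update with norm-based clipping, the two ingredients the abstract names (posterior-sampling noise and parameter clipping). All names here (`sgld_step`, `clip_by_norm`, `step_size`, `clip_norm`) are illustrative assumptions, not the authors' actual DP-GSGLD algorithm or API.

```python
import numpy as np

def clip_by_norm(v, clip_norm):
    """Scale v down so its L2 norm is at most clip_norm (DP-style clipping)."""
    norm = np.linalg.norm(v)
    return v * min(1.0, clip_norm / (norm + 1e-12))

def sgld_step(theta, grad_log_posterior, step_size=1e-3, clip_norm=1.0, rng=None):
    """One standard SGLD update: a clipped gradient step plus Gaussian noise
    whose variance matches the step size, so iterates sample the posterior.

    This is plain SGLD with clipping, shown only to illustrate where the
    intrinsic randomness comes from; DP-GSGLD's gradient estimation via
    Bayesian sampling is more involved (see the paper).
    """
    rng = np.random.default_rng() if rng is None else rng
    g = clip_by_norm(grad_log_posterior(theta), clip_norm)
    noise = rng.normal(0.0, np.sqrt(step_size), size=theta.shape)
    return theta + 0.5 * step_size * g + noise

# Toy usage: sample from a standard normal posterior, where
# grad log p(theta) = -theta.
theta = np.zeros(3)
rng = np.random.default_rng(0)
for _ in range(100):
    theta = sgld_step(theta, lambda t: -t, step_size=1e-2, rng=rng)
```

The injected Gaussian noise is what the abstract refers to as SGLD's "natural noise"; clipping bounds each update's sensitivity, which is the standard ingredient for proving a DP guarantee.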

Original language: English
Article number: 103839
Journal: Computers and Security
Volume: 142
State: Published - Jul 2024

Keywords

  • Bayesian learning
  • Deep learning optimizer
  • Differential privacy
  • Stochastic gradient Langevin dynamics
