BapFL: You can Backdoor Personalized Federated Learning

Tiandi Ye, Cen Chen, Yinggui Wang, Xiang Li, Ming Gao

Research output: Contribution to journal › Article › peer-review

9 Scopus citations

Abstract

In federated learning (FL), malicious clients can manipulate the predictions of the trained model through backdoor attacks, posing a significant threat to the security of FL systems. Existing research primarily focuses on backdoor attacks and defenses within the generic federated learning scenario, where all clients collaborate to train a single global model. A recent study by Qin et al. [24] marks the initial exploration of backdoor attacks within the personalized federated learning (pFL) scenario, where each client constructs a personalized model based on its local data. Notably, that study demonstrates that pFL methods with parameter decoupling can significantly enhance robustness against backdoor attacks. In this article, however, we show that pFL methods with parameter decoupling remain vulnerable to backdoor attacks. The resistance of pFL methods with parameter decoupling is attributed to the heterogeneity between the classifiers of malicious clients and those of their benign counterparts. We analyze two direct causes of this classifier heterogeneity: (1) data heterogeneity that inherently exists among clients, and (2) poisoning by malicious clients, which further exacerbates the data heterogeneity. To address these issues, we propose a two-pronged attack method, BapFL, which comprises two simple yet effective strategies: (1) poisoning only the feature encoder while keeping the classifier fixed and (2) diversifying the classifier by introducing noise so that it resembles the classifiers of benign clients. Extensive experiments on three benchmark datasets under varying conditions demonstrate the effectiveness of our proposed attack. Additionally, we evaluate six widely used defense methods and find that BapFL still poses a significant threat even in the presence of the best-performing defense, Multi-Krum. We hope to inspire further research on attack and defense strategies in pFL scenarios. The code is available at: https://github.com/BapFL/code
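The abstract describes the two strategies only at a high level; the sketch below illustrates how a malicious client's local update might combine them, assuming a model split into `encoder` and `classifier` modules and a hypothetical `add_trigger` poisoning routine with noise scale `sigma`. This is an illustrative sketch, not the authors' released implementation (see the linked repository for that).

```python
import copy
import torch
import torch.nn.functional as F

def malicious_local_update(model, loader, add_trigger, target_label,
                           lr=0.01, sigma=0.05, poison_frac=0.5):
    """Sketch of a BapFL-style malicious client update (illustrative only).

    Strategy 1: poison only the feature encoder -- the classifier head is
    frozen so it stays close to the benign clients' classifiers.
    Strategy 2: perturb the classifier with Gaussian noise before reporting
    the update, to mimic the classifier heterogeneity of benign clients.
    """
    model = copy.deepcopy(model)
    for p in model.classifier.parameters():   # Strategy 1: freeze classifier
        p.requires_grad = False
    opt = torch.optim.SGD(model.encoder.parameters(), lr=lr)

    model.train()
    for x, y in loader:
        # Poison a fraction of each batch with the backdoor trigger.
        n_poison = int(poison_frac * x.size(0))
        x[:n_poison] = add_trigger(x[:n_poison])
        y[:n_poison] = target_label

        opt.zero_grad()
        loss = F.cross_entropy(model.classifier(model.encoder(x)), y)
        loss.backward()
        opt.step()

    # Strategy 2: add noise to the (frozen) classifier before submitting,
    # so the reported classifier looks as heterogeneous as benign ones.
    with torch.no_grad():
        for p in model.classifier.parameters():
            p.add_(sigma * torch.randn_like(p))

    return model.state_dict()
```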

Original language: English
Article number: 166
Journal: ACM Transactions on Knowledge Discovery from Data
Volume: 18
Issue number: 7
DOIs
State: Published - 19 Jun 2024

Keywords

  • Personalized federated learning
  • backdoor attack
  • model security
