TY - GEN
T1 - FilterFL
T2 - 32nd ACM SIGSAC Conference on Computer and Communications Security, CCS 2025
AU - Yang, Yanxin
AU - Hu, Ming
AU - Xie, Xiaofei
AU - Cao, Yue
AU - Zhang, Pengyu
AU - Huang, Yihao
AU - Chen, Mingsong
N1 - Publisher Copyright:
© 2025 Copyright held by the owner/author(s).
PY - 2025/11/22
Y1 - 2025/11/22
N2 - Due to the lack of data auditing techniques for untrusted clients, Federated Learning (FL) is vulnerable to backdoor attacks. Although various methods have been proposed to protect FL against backdoor attacks, they still exhibit poor defense performance in extreme data heterogeneity scenarios. Worse still, these methods rely strongly on additional datasets, violating the privacy protection requirements of FL. To overcome these shortcomings, this paper proposes a novel data-free backdoor defense approach for FL, named FilterFL, which strives to prevent uploaded client models containing backdoor knowledge from participating in the aggregation operation in each FL communication round. Based on our knowledge extraction and backdoor filtering schemes using two well-designed Conditional Generative Adversarial Networks (CGANs), FilterFL extracts the incremental knowledge learned by a newly updated global model and filters out its backdoor components, which can then be used to generate, for each category, one sample that reflects backdoor knowledge. If an uploaded local model can confidently classify a generated sample into its target category, the knowledge contributed by that model is excluded from the aggregation. In this way, FilterFL can effectively defend against backdoor attacks without using any additional auxiliary data. Comprehensive experiments on well-known datasets demonstrate that, compared with state-of-the-art methods, our approach achieves the best defense performance across various data heterogeneity scenarios.
AB - Due to the lack of data auditing techniques for untrusted clients, Federated Learning (FL) is vulnerable to backdoor attacks. Although various methods have been proposed to protect FL against backdoor attacks, they still exhibit poor defense performance in extreme data heterogeneity scenarios. Worse still, these methods rely strongly on additional datasets, violating the privacy protection requirements of FL. To overcome these shortcomings, this paper proposes a novel data-free backdoor defense approach for FL, named FilterFL, which strives to prevent uploaded client models containing backdoor knowledge from participating in the aggregation operation in each FL communication round. Based on our knowledge extraction and backdoor filtering schemes using two well-designed Conditional Generative Adversarial Networks (CGANs), FilterFL extracts the incremental knowledge learned by a newly updated global model and filters out its backdoor components, which can then be used to generate, for each category, one sample that reflects backdoor knowledge. If an uploaded local model can confidently classify a generated sample into its target category, the knowledge contributed by that model is excluded from the aggregation. In this way, FilterFL can effectively defend against backdoor attacks without using any additional auxiliary data. Comprehensive experiments on well-known datasets demonstrate that, compared with state-of-the-art methods, our approach achieves the best defense performance across various data heterogeneity scenarios.
KW - backdoor defense
KW - conditional generative adversarial network
KW - data-free
KW - Federated learning
KW - knowledge filtering
UR - https://www.scopus.com/pages/publications/105023836485
U2 - 10.1145/3719027.3744883
DO - 10.1145/3719027.3744883
M3 - Conference contribution
AN - SCOPUS:105023836485
T3 - CCS 2025 - Proceedings of the 2025 ACM SIGSAC Conference on Computer and Communications Security
SP - 3147
EP - 3161
BT - CCS 2025 - Proceedings of the 2025 ACM SIGSAC Conference on Computer and Communications Security
PB - Association for Computing Machinery, Inc
Y2 - 13 October 2025 through 17 October 2025
ER -