TY - GEN
T1 - Boosting Gradient Leakage Attacks
T2 - 34th USENIX Security Symposium, USENIX Security 2025
AU - Fan, Mingyuan
AU - Wang, Fuyi
AU - Chen, Cen
AU - Zhou, Jianying
N1 - Publisher Copyright:
© 2025 by The USENIX Association. All Rights Reserved.
PY - 2025
Y1 - 2025
AB - Federated learning (FL) enables collaborative model training among multiple clients without exposing raw data. Its ability to safeguard privacy, at the heart of FL, has recently become a hot-button debate topic. Several studies have introduced a type of attack known as gradient leakage attacks (GLAs), which exploit the gradients shared during training to reconstruct clients’ raw data. However, other literature contends that GLAs pose no substantial privacy risk in practical FL environments, as their effectiveness is limited to overly relaxed conditions such as small batch sizes and knowledge of clients’ data distributions. This paper bridges this critical gap by empirically demonstrating that clients’ data can still be effectively reconstructed even within realistic FL environments. Upon revisiting GLAs, we recognize that their performance failures stem from their inability to handle the gradient matching problem. To alleviate the performance bottlenecks identified above, we develop FEDLEAK, which introduces two novel techniques: partial gradient matching and gradient regularization. Moreover, to evaluate the performance of FEDLEAK in real-world FL environments, we formulate a practical evaluation protocol grounded in a thorough review of extensive FL literature and industry practices. Under this protocol, FEDLEAK still achieves high-fidelity data reconstruction, underscoring the significant vulnerability of FL systems and the urgent need for more effective defense methods.
UR - https://www.scopus.com/pages/publications/105021371016
M3 - Conference paper
AN - SCOPUS:105021371016
T3 - Proceedings of the 34th USENIX Security Symposium
SP - 2985
EP - 3004
BT - Proceedings of the 34th USENIX Security Symposium
PB - USENIX Association
Y2 - 13 August 2025 through 15 August 2025
ER -