TY - GEN
T1 - Twice the Gradient, Twice the Privacy Risk in Federated Learning? A Case Study of Federated Recommendation Systems
AU - Deng, Zhenyu
AU - Liu, Ying
AU - Tang, Ming
AU - Zhao, Xiangyu
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - Federated learning mitigates data leakage risks while maintaining training efficiency via gradient sharing. Nonetheless, prior studies have demonstrated persistent privacy vulnerabilities: attackers can reconstruct training data from the shared gradients. Existing reconstruction methods assume the attacker can access all model parameters; however, sensitive parameters (such as user embeddings in federated recommendation systems) often remain private, and this limited access leads to inaccurate reconstructions. Using federated recommendation systems as a case study, we identify insufficient attack constraints as the root cause of these reconstruction failures. To address this limitation, we propose MGradInv, a method that leverages gradients from multiple training steps as additional reconstruction constraints. Experimental results demonstrate that establishing sufficient constraints prevents convergence to local optima and reduces reconstruction errors. We further investigate two key factors affecting MGradInv's performance: target model convergence and the gradient interval. Results indicate that attacks are most effective during the early stages of training but deteriorate as the model converges, and that MGradInv remains effective even with gradient intervals of up to 230 steps. Our code and data are available here.
AB - Federated learning mitigates data leakage risks while maintaining training efficiency via gradient sharing. Nonetheless, prior studies have demonstrated persistent privacy vulnerabilities: attackers can reconstruct training data from the shared gradients. Existing reconstruction methods assume the attacker can access all model parameters; however, sensitive parameters (such as user embeddings in federated recommendation systems) often remain private, and this limited access leads to inaccurate reconstructions. Using federated recommendation systems as a case study, we identify insufficient attack constraints as the root cause of these reconstruction failures. To address this limitation, we propose MGradInv, a method that leverages gradients from multiple training steps as additional reconstruction constraints. Experimental results demonstrate that establishing sufficient constraints prevents convergence to local optima and reduces reconstruction errors. We further investigate two key factors affecting MGradInv's performance: target model convergence and the gradient interval. Results indicate that attacks are most effective during the early stages of training but deteriorate as the model converges, and that MGradInv remains effective even with gradient intervals of up to 230 steps. Our code and data are available here.
KW - Federated learning
KW - Recommendation systems
KW - Trustworthy machine learning
UR - https://www.scopus.com/pages/publications/105023976484
U2 - 10.1109/IJCNN64981.2025.11227362
DO - 10.1109/IJCNN64981.2025.11227362
M3 - Conference contribution
AN - SCOPUS:105023976484
T3 - Proceedings of the International Joint Conference on Neural Networks
BT - International Joint Conference on Neural Networks, IJCNN 2025 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2025 International Joint Conference on Neural Networks, IJCNN 2025
Y2 - 30 June 2025 through 5 July 2025
ER -