TY - JOUR
T1 - Verifiable Private Federated Learning Achieving Low-Communication with CUR Decomposition
AU - Wu, Changti
AU - Wang, Lulu
AU - Zhang, Lei
N1 - Publisher Copyright:
© 2004-2012 IEEE.
PY - 2026
Y1 - 2026
N2 - Federated learning (FL) allows multiple clients to collaboratively train a shared machine learning model without sharing local data. Despite its advantages, FL faces serious security and privacy threats. Many existing solutions rely on cryptographic methods to protect data and ensure verifiability, but these approaches often enlarge the model or impose high communication costs. They also overlook FL's limited uplink and downlink bandwidth and rarely account for practical issues such as client dropouts. To address these gaps, we propose LC-VPFL, a federated learning framework that ensures data privacy, verifiability, low communication overhead, and dropout tolerance. Our approach leverages secret sharing and masking to protect data privacy, while homomorphic hashing detects malicious server behavior. To minimize communication costs, we apply quantization and CUR matrix decomposition, optimizing both uplink and downlink transmissions. We formally prove the security of LC-VPFL and provide a theoretical analysis demonstrating that, for a corruption threshold t, the communication complexity of partial clients remains O(t), and the protocol tolerates arbitrary client dropouts. Experimental results show that LC-VPFL reduces uplink communication costs by over 50% in most scenarios and downlink communication costs to less than 12.5% of those in FedAvg, with an accuracy loss within 3%.
AB - Federated learning (FL) allows multiple clients to collaboratively train a shared machine learning model without sharing local data. Despite its advantages, FL faces serious security and privacy threats. Many existing solutions rely on cryptographic methods to protect data and ensure verifiability, but these approaches often enlarge the model or impose high communication costs. They also overlook FL's limited uplink and downlink bandwidth and rarely account for practical issues such as client dropouts. To address these gaps, we propose LC-VPFL, a federated learning framework that ensures data privacy, verifiability, low communication overhead, and dropout tolerance. Our approach leverages secret sharing and masking to protect data privacy, while homomorphic hashing detects malicious server behavior. To minimize communication costs, we apply quantization and CUR matrix decomposition, optimizing both uplink and downlink transmissions. We formally prove the security of LC-VPFL and provide a theoretical analysis demonstrating that, for a corruption threshold t, the communication complexity of partial clients remains O(t), and the protocol tolerates arbitrary client dropouts. Experimental results show that LC-VPFL reduces uplink communication costs by over 50% in most scenarios and downlink communication costs to less than 12.5% of those in FedAvg, with an accuracy loss within 3%.
KW - dropout tolerance
KW - Federated learning
KW - low-communication
KW - privacy preserving
KW - verifiability
UR - https://www.scopus.com/pages/publications/105027775165
U2 - 10.1109/TDSC.2026.3653026
DO - 10.1109/TDSC.2026.3653026
M3 - Article
AN - SCOPUS:105027775165
SN - 1545-5971
JO - IEEE Transactions on Dependable and Secure Computing
JF - IEEE Transactions on Dependable and Secure Computing
ER -