TY - JOUR
T1 - Dual-Server Privacy-Preserving Collaborative Deep Learning
T2 - A Round-Efficient, Dynamic and Lossless Approach
AU - Wang, Lulu
AU - Zhang, Lei
AU - Choo, Kim-Kwang Raymond
AU - Domingo-Ferrer, Josep
AU - Conti, Mauro
AU - Gao, Yuanyuan
N1 - Publisher Copyright:
© 2004-2012 IEEE.
PY - 2025
Y1 - 2025
N2 - To address limitations in existing privacy-preserving collaborative deep learning (CDL) schemes, we propose a dual-server privacy-preserving CDL scheme based on homomorphic encryption and a masking technique. Specifically, in our scheme a random seed is used to initialize a pseudorandom generator that produces multiple pseudorandom numbers. These pseudorandom numbers, together with random noise, are used to generate masks that are added to all parameters of a participant’s locally trained model. By using homomorphic encryption, the random noise can be encrypted and eventually used to remove the masks with low message expansion. This also ensures that the global model is lossless in accuracy. Furthermore, if participants join or leave the system, only the time required to complete both model update aggregation and encrypted mask aggregation is affected. We demonstrate that our scheme is round-efficient, dynamic, and lossless. We also show that it is secure against inference attacks and can resist collusion attacks by up to t − 2 participants and one of the two servers, where t is a security parameter indicating the minimum number of participants in an aggregation round.
AB - To address limitations in existing privacy-preserving collaborative deep learning (CDL) schemes, we propose a dual-server privacy-preserving CDL scheme based on homomorphic encryption and a masking technique. Specifically, in our scheme a random seed is used to initialize a pseudorandom generator that produces multiple pseudorandom numbers. These pseudorandom numbers, together with random noise, are used to generate masks that are added to all parameters of a participant’s locally trained model. By using homomorphic encryption, the random noise can be encrypted and eventually used to remove the masks with low message expansion. This also ensures that the global model is lossless in accuracy. Furthermore, if participants join or leave the system, only the time required to complete both model update aggregation and encrypted mask aggregation is affected. We demonstrate that our scheme is round-efficient, dynamic, and lossless. We also show that it is secure against inference attacks and can resist collusion attacks by up to t − 2 participants and one of the two servers, where t is a security parameter indicating the minimum number of participants in an aggregation round.
KW - Deep learning
KW - collaborative deep learning
KW - federated learning
KW - homomorphic encryption
KW - privacy
UR - https://www.scopus.com/pages/publications/105013653808
U2 - 10.1109/TDSC.2025.3599911
DO - 10.1109/TDSC.2025.3599911
M3 - Article
AN - SCOPUS:105013653808
SN - 1545-5971
VL - 22
SP - 7759
EP - 7772
JO - IEEE Transactions on Dependable and Secure Computing
JF - IEEE Transactions on Dependable and Secure Computing
IS - 6
ER -