TY - GEN
T1 - Secure Mutual Learning with Low Interactions for Deep Model Training
AU - Zhu, Wenxing
AU - Li, Xiangxue
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - This paper proposes SMuLe, a secure mutual learning protocol for two-party deep model training with low interaction complexity and communication overhead. Unlike federated learning and prior art that rely on secure multi-party computation, the two parties securely exchange blinded predictions on each other's dataset, so each underlying model benefits not only from the true labels of the data but also from the predictions of the other party. After the protocol, each participant privately holds a well-trained plaintext model. The contributions include the following: (i) the communication cost of SMuLe is lower than that of state-of-the-art two-party training protocols; (ii) the solution is flexible and works with different secure inference schemes; (iii) SMuLe can resist malicious attacks via poisoned samples. Experiments show that on CIFAR-10, SMuLe achieves desirable accuracy even with a small convnet, and the communication cost per epoch is under 75 MB as expected.
AB - This paper proposes SMuLe, a secure mutual learning protocol for two-party deep model training with low interaction complexity and communication overhead. Unlike federated learning and prior art that rely on secure multi-party computation, the two parties securely exchange blinded predictions on each other's dataset, so each underlying model benefits not only from the true labels of the data but also from the predictions of the other party. After the protocol, each participant privately holds a well-trained plaintext model. The contributions include the following: (i) the communication cost of SMuLe is lower than that of state-of-the-art two-party training protocols; (ii) the solution is flexible and works with different secure inference schemes; (iii) SMuLe can resist malicious attacks via poisoned samples. Experiments show that on CIFAR-10, SMuLe achieves desirable accuracy even with a small convnet, and the communication cost per epoch is under 75 MB as expected.
KW - Deep Model Training
KW - Mutual Learning
KW - Privacy
KW - Secure multi-party computation
KW - homomorphic encryption
UR - https://www.scopus.com/pages/publications/85197473606
U2 - 10.1109/MSN60784.2023.00087
DO - 10.1109/MSN60784.2023.00087
M3 - Conference contribution
AN - SCOPUS:85197473606
T3 - Proceedings - 2023 19th International Conference on Mobility, Sensing and Networking, MSN 2023
SP - 579
EP - 587
BT - Proceedings - 2023 19th International Conference on Mobility, Sensing and Networking, MSN 2023
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 19th International Conference on Mobility, Sensing and Networking, MSN 2023
Y2 - 14 December 2023 through 16 December 2023
ER -