Secure Mutual Learning with Low Interactions for Deep Model Training

  • Wenxing Zhu*
  • Xiangxue Li
  • *Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

1 Scopus citation

Abstract

This paper proposes SMuLe, a secure mutual learning protocol for two-party deep model training that demands little interaction complexity and communication overhead. Unlike federated learning and prior art that relies on secure multi-party computation, the strategy is for the two parties to securely exchange blinded predictions on each other's datasets, so that the underlying models benefit not only from the true labels of the data but also from the other party's predictions. After the protocol completes, each participant privately holds a well-trained plaintext model. The contributions include the following: (i) the communication cost of SMuLe is lower than that of state-of-the-art two-party training protocols; (ii) the solution is flexible and works with different secure inference schemes; (iii) SMuLe can resist malicious attacks mounted through poisoned samples. Experiments show that on CIFAR-10, SMuLe obtains desirable accuracy even with a small ConvNet, and the communication cost per epoch stays below 75 MB, as expected.
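To make the mutual-learning signal described above concrete, the sketch below shows one common way such a per-sample objective can be formed: cross-entropy against the true label plus a KL-divergence term pulling the model toward the peer's prediction. This is a minimal illustration of the general mutual-learning idea, not the paper's protocol: the function name `mutual_loss`, the weighting `alpha`, and the assumption that the peer's blinded prediction arrives as a plain probability vector are all illustrative.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a logit vector.
    e = np.exp(z - z.max())
    return e / e.sum()

def mutual_loss(logits, true_label, peer_probs, alpha=0.5):
    """Illustrative mutual-learning objective (not SMuLe's exact loss):
    cross-entropy on the true label plus alpha-weighted KL divergence
    from the peer's prediction to our own. `alpha` is a hypothetical
    hyperparameter balancing the two signals."""
    p = softmax(logits)
    ce = -np.log(p[true_label] + 1e-12)  # supervised term from the true label
    # Distillation-style term: KL(peer || own prediction).
    kl = np.sum(peer_probs * (np.log(peer_probs + 1e-12) - np.log(p + 1e-12)))
    return ce + alpha * kl

# Toy example: 3-class logits, true class 1, peer mostly agrees on class 1.
logits = np.array([0.2, 2.0, -1.0])
peer = np.array([0.1, 0.8, 0.1])
loss = mutual_loss(logits, 1, peer)
```

When the peer's prediction matches the model's own softmax output, the KL term vanishes and the loss reduces to plain cross-entropy, which is why the peer signal only steers training where the two models disagree.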

Original language: English
Title of host publication: Proceedings - 2023 19th International Conference on Mobility, Sensing and Networking, MSN 2023
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 579-587
Number of pages: 9
ISBN (Electronic): 9798350358261
DOIs
State: Published - 2023
Event: 19th International Conference on Mobility, Sensing and Networking, MSN 2023 - Jiangsu, China
Duration: 14 Dec 2023 - 16 Dec 2023

Publication series

Name: Proceedings - 2023 19th International Conference on Mobility, Sensing and Networking, MSN 2023

Conference

Conference: 19th International Conference on Mobility, Sensing and Networking, MSN 2023
Country/Territory: China
City: Jiangsu
Period: 14/12/23 - 16/12/23

Keywords

  • Deep Model Training
  • Mutual Learning
  • Privacy
  • Secure multi-party computation
  • homomorphic encryption

