TY - GEN
T1 - Adaptive privacy-preserving and shuffling aggregation in federated-learning
AU - He, Huixian
AU - Cao, Zhenfu
N1 - Publisher Copyright:
© 2021 11th International Workshop on Computer Science and Engineering, WCSE 2021. All Rights Reserved.
PY - 2021
Y1 - 2021
N2 - Deep learning models are usually trained on data sets containing sensitive information, such as personal shopping transactions, personal contacts, and medical records. Therefore, a growing body of work attempts to train neural networks subject to privacy constraints, specified via differential privacy or divergence-based relaxations. However, these privacy definitions have weaknesses in handling certain important primitives (composition and sub-sampling), which makes the privacy analysis of training neural networks loose or complex. Federated learning is a popular privacy-protection approach that collects local gradient information instead of raw data. One way to achieve a strict privacy guarantee is to apply differential privacy to federated learning, but previous work has not provided a practical solution. This paper proposes a new adaptive privacy-preserving and shuffling aggregation mechanism for federated learning. It makes the perturbed data more distinct from its original value while introducing lower variance. In addition, the proposed mechanism performs updates through a split-and-shuffle model, avoiding the curse of dimensionality. Empirical evaluations on three commonly used data sets, MNIST, Fashion-MNIST, and CIFAR-10, show that our solution not only achieves excellent deep learning performance but also provides strong privacy protection.
AB - Deep learning models are usually trained on data sets containing sensitive information, such as personal shopping transactions, personal contacts, and medical records. Therefore, a growing body of work attempts to train neural networks subject to privacy constraints, specified via differential privacy or divergence-based relaxations. However, these privacy definitions have weaknesses in handling certain important primitives (composition and sub-sampling), which makes the privacy analysis of training neural networks loose or complex. Federated learning is a popular privacy-protection approach that collects local gradient information instead of raw data. One way to achieve a strict privacy guarantee is to apply differential privacy to federated learning, but previous work has not provided a practical solution. This paper proposes a new adaptive privacy-preserving and shuffling aggregation mechanism for federated learning. It makes the perturbed data more distinct from its original value while introducing lower variance. In addition, the proposed mechanism performs updates through a split-and-shuffle model, avoiding the curse of dimensionality. Empirical evaluations on three commonly used data sets, MNIST, Fashion-MNIST, and CIFAR-10, show that our solution not only achieves excellent deep learning performance but also provides strong privacy protection.
KW - Federated learning
KW - Privacy preserving
UR - https://www.scopus.com/pages/publications/85114209190
U2 - 10.18178/wcse.2021.06.006
DO - 10.18178/wcse.2021.06.006
M3 - Conference contribution
AN - SCOPUS:85114209190
T3 - 2021 11th International Workshop on Computer Science and Engineering, WCSE 2021
SP - 37
EP - 41
BT - 2021 11th International Workshop on Computer Science and Engineering, WCSE 2021
PB - International Workshop on Computer Science and Engineering (WCSE)
T2 - 2021 11th International Workshop on Computer Science and Engineering, WCSE 2021
Y2 - 19 June 2021 through 21 June 2021
ER -