Abstract
Federated learning (FL) is a distributed machine learning framework that obtains an optimal model from clients' local updates. As an efficient design for model convergence and data communication, cloud-edge-client hierarchical federated learning (HFL) attracts more attention than the typical cloud-client architecture. However, HFL still poses a threat to clients' sensitive data, since an adversary can analyze the uploaded and downloaded parameters. In this paper, to address this information leakage effectively, we propose a novel privacy-preserving scheme based on the concept of differential privacy (DP): Gaussian noise is added to the shared parameters when they are uploaded to the edge and cloud servers and when they are broadcast to clients. Our algorithm achieves global differential privacy with adjustable noise throughout the architecture. We evaluate its performance on image classification tasks. In our experiment on the Modified National Institute of Standards and Technology (MNIST) dataset, we obtain 91% model accuracy. Compared to the previous two-layer HFL-DP, our design is more secure while being just as accurate.
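The core mechanism the abstract describes, perturbing shared parameters with Gaussian noise before they are uploaded or broadcast, can be sketched with the standard Gaussian mechanism. This is an illustrative sketch, not the paper's actual algorithm: the function name, the clipping step, and the noise calibration `sigma = C * sqrt(2 ln(1.25/δ)) / ε` are the textbook (ε, δ)-DP construction for a single release, assumed here for concreteness.

```python
import numpy as np

def gaussian_mechanism_update(params, clip_norm, epsilon, delta, rng=None):
    """Clip a client's parameter update and add calibrated Gaussian noise.

    With L2 sensitivity bounded by clip_norm (C), the standard Gaussian
    mechanism uses sigma = C * sqrt(2 * ln(1.25 / delta)) / epsilon to
    achieve (epsilon, delta)-DP for one release of the update.
    """
    rng = rng or np.random.default_rng()
    # Bound the L2 sensitivity by scaling the update down to norm <= clip_norm.
    norm = np.linalg.norm(params)
    clipped = params * min(1.0, clip_norm / max(norm, 1e-12))
    # Calibrate the noise scale to the (epsilon, delta) privacy budget.
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return clipped + rng.normal(0.0, sigma, size=params.shape)

# Example: a client perturbs its update before uploading it to an edge server.
update = np.ones(10)
noisy = gaussian_mechanism_update(update, clip_norm=1.0, epsilon=1.0, delta=1e-5)
```

A smaller epsilon or delta yields larger sigma, trading accuracy for privacy; in a hierarchical setting such noising would be applied at each upload/broadcast step, with budgets composed across rounds.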
| Original language | English |
|---|---|
| Pages (from-to) | 3741-3758 |
| Number of pages | 18 |
| Journal | Electronic Research Archive |
| Volume | 31 |
| Issue | 7 |
| DOI | |
| Publication status | Published - 2023 |
Fingerprint
Explore the research topics of 'Hierarchical federated learning with global differential privacy'. Together they form a unique fingerprint.