TY - JOUR
T1 - Hybrid differential privacy based federated learning for Internet of Things
AU - Liu, Wenyan
AU - Cheng, Junhong
AU - Wang, Xiaoling
AU - Lu, Xingjian
AU - Yin, Jianwei
N1 - Publisher Copyright:
© 2022 Elsevier B.V.
PY - 2022/3
Y1 - 2022/3
N2 - Wireless sensor networks have been widely used to achieve fine-grained information collection. However, the large-scale data acquisition and processing performed by sensors raise privacy concerns. Federated learning is a promising, privacy-friendly framework that trains a model across multiple devices or edge nodes holding local data samples, without transferring their data to the server. Maintaining data locality alone is not sufficient to protect privacy, so differential privacy is often applied in federated learning. However, different users have different privacy requirements, so it is inappropriate to use a single protection scheme that assumes all users either trust or distrust the server: the former offers weaker privacy, while the latter suffers poor accuracy. This paper proposes a secure and reliable federated learning algorithm by integrating hybrid differential privacy into federated learning. We divide users into two categories according to their privacy needs. In addition, we analyze the convergence and privacy bounds of the proposed algorithm, and we propose an adaptive gradient clipping scheme and an improved composition method to reduce the effects of clipping and noise, respectively. The validity of the algorithm is verified by theoretical analysis and experimental evaluation on real-world datasets.
AB - Wireless sensor networks have been widely used to achieve fine-grained information collection. However, the large-scale data acquisition and processing performed by sensors raise privacy concerns. Federated learning is a promising, privacy-friendly framework that trains a model across multiple devices or edge nodes holding local data samples, without transferring their data to the server. Maintaining data locality alone is not sufficient to protect privacy, so differential privacy is often applied in federated learning. However, different users have different privacy requirements, so it is inappropriate to use a single protection scheme that assumes all users either trust or distrust the server: the former offers weaker privacy, while the latter suffers poor accuracy. This paper proposes a secure and reliable federated learning algorithm by integrating hybrid differential privacy into federated learning. We divide users into two categories according to their privacy needs. In addition, we analyze the convergence and privacy bounds of the proposed algorithm, and we propose an adaptive gradient clipping scheme and an improved composition method to reduce the effects of clipping and noise, respectively. The validity of the algorithm is verified by theoretical analysis and experimental evaluation on real-world datasets.
KW - Convergence performance
KW - Differential privacy
KW - Federated learning
KW - Privacy protection
UR - https://www.scopus.com/pages/publications/85124268173
U2 - 10.1016/j.sysarc.2022.102418
DO - 10.1016/j.sysarc.2022.102418
M3 - Article
AN - SCOPUS:85124268173
SN - 1383-7621
VL - 124
JO - Journal of Systems Architecture
JF - Journal of Systems Architecture
M1 - 102418
ER -