TY - GEN
T1 - ALI-DPFL: Differentially Private Federated Learning with Adaptive Local Iterations
T2 - 25th IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks, WoWMoM 2024
AU - Ling, Xinpeng
AU - Fu, Jie
AU - Wang, Kuncan
AU - Liu, Haitao
AU - Chen, Zhili
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Federated Learning (FL) is a distributed machine learning technique that allows model training across multiple devices or organizations by sharing training parameters instead of raw data. However, adversaries can still infer individual information through inference attacks (e.g., differential attacks) on these training parameters. As a result, Differential Privacy (DP) has been widely used in FL to prevent such attacks. We consider differentially private federated learning in a resource-constrained scenario, where both the privacy budget and the number of communication rounds are constrained. By theoretically analyzing the convergence, we find the optimal number of local Differentially Private Stochastic Gradient Descent (DPSGD) iterations for clients between any two sequential global updates. Based on this, we design an algorithm for Differentially Private Federated Learning with Adaptive Local Iterations (ALI-DPFL). We evaluate our algorithm on the MNIST, FashionMNIST, and CIFAR-10 datasets, and demonstrate significantly better performance than previous work in the resource-constrained scenario. Code is available at https://github.com/KnightWan/ALI-DPFL.
AB - Federated Learning (FL) is a distributed machine learning technique that allows model training across multiple devices or organizations by sharing training parameters instead of raw data. However, adversaries can still infer individual information through inference attacks (e.g., differential attacks) on these training parameters. As a result, Differential Privacy (DP) has been widely used in FL to prevent such attacks. We consider differentially private federated learning in a resource-constrained scenario, where both the privacy budget and the number of communication rounds are constrained. By theoretically analyzing the convergence, we find the optimal number of local Differentially Private Stochastic Gradient Descent (DPSGD) iterations for clients between any two sequential global updates. Based on this, we design an algorithm for Differentially Private Federated Learning with Adaptive Local Iterations (ALI-DPFL). We evaluate our algorithm on the MNIST, FashionMNIST, and CIFAR-10 datasets, and demonstrate significantly better performance than previous work in the resource-constrained scenario. Code is available at https://github.com/KnightWan/ALI-DPFL.
KW - adaptive
KW - convergence analysis
KW - differential privacy
KW - federated learning
KW - resource constrained
UR - https://www.scopus.com/pages/publications/85198900789
U2 - 10.1109/WoWMoM60985.2024.00062
DO - 10.1109/WoWMoM60985.2024.00062
M3 - Conference contribution
AN - SCOPUS:85198900789
T3 - Proceedings - 2024 IEEE 25th International Symposium on a World of Wireless, Mobile and Multimedia Networks, WoWMoM 2024
SP - 349
EP - 358
BT - Proceedings - 2024 IEEE 25th International Symposium on a World of Wireless, Mobile and Multimedia Networks, WoWMoM 2024
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 4 June 2024 through 7 June 2024
ER -