POP-FL: Towards Efficient Federated Learning on Edge Using Parallel Over-Parameterization

  • Xingjian Lu
  • Haikun Zheng
  • Wenyan Liu*
  • Yuhui Jiang
  • Hongyue Wu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Federated Learning (FL) is a promising paradigm for mining massive data while respecting users' privacy. However, deploying FL on resource-constrained edge devices remains elusive due to its high resource demands. In this paper, unlike existing works that rely on expensive dense models, we propose to utilize dynamic sparse training in FL and design a novel sparse-to-sparse FL framework, named POP-FL. The framework reduces both computation and communication overheads while maintaining the performance of the global model. Specifically, POP-FL partitions massive clients into groups and performs parallel parameter exploration, i.e., Parallel Over-Parameterization, through collaboration among these groups. This exploration greatly improves the expressiveness and generalizability of sparse training in FL (especially at extreme sparsity levels) by reliably covering a sufficient set of parameters and dynamically updating the structure of the global sparse network during training. Experimental results show that, compared with existing sparse-to-sparse training methods under both IID and non-IID data distributions, POP-FL achieves the best inference accuracy on various representative networks.
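To make the grouped-exploration idea concrete, the following is a minimal, self-contained Python sketch of one plausible reading of the abstract: client groups each train a differently masked sparse sub-network, their updates are aggregated coordinate-wise, and the masks are periodically regrown so the global sparse structure evolves. All names, the mask-update rule, and the aggregation scheme here are illustrative assumptions, not the paper's actual algorithm.

import numpy as np

rng = np.random.default_rng(0)

def random_mask(dim, sparsity, rng):
    # Binary mask keeping a (1 - sparsity) fraction of the weights.
    keep = max(1, int(dim * (1.0 - sparsity)))
    mask = np.zeros(dim, dtype=bool)
    mask[rng.choice(dim, size=keep, replace=False)] = True
    return mask

# Stand-in for a model's weights; dim/sparsity/num_groups are hypothetical.
dim, sparsity, num_groups = 1000, 0.9, 4
global_w = rng.normal(size=dim)

# Each client group explores a different sparse sub-network, so together the
# groups cover far more parameters than any single sparse mask could.
masks = [random_mask(dim, sparsity, rng) for _ in range(num_groups)]

for rnd in range(5):
    updates, counts = np.zeros(dim), np.zeros(dim)
    for mask in masks:
        # Each group trains only its masked slice of the model; a random
        # gradient step stands in for the clients' local training.
        local_w = np.where(mask, global_w, 0.0)
        local_w -= 0.01 * rng.normal(size=dim) * mask
        updates += local_w * mask
        counts += mask
    # Average coordinates trained by one or more groups; the rest are kept.
    trained = counts > 0
    global_w[trained] = updates[trained] / counts[trained]
    # Dynamic sparse training: regrow the masks so the global sparse
    # network's structure keeps changing across rounds.
    masks = [random_mask(dim, sparsity, rng) for _ in range(num_groups)]

Running the loop prints nothing but leaves global_w aggregated across all group masks; the key design point the sketch illustrates is that union coverage of the masks, not any single sparse model, determines how much of the parameter space is explored.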

Original language: English
Pages (from-to): 617-630
Number of pages: 14
Journal: IEEE Transactions on Services Computing
Volume: 17
Issue number: 2
State: Published - 1 Mar 2024

Keywords

  • Distributed machine learning
  • edge computing
  • federated learning
  • model sparse training
