Communication Efficient Federated Learning via Channel-wise Dynamic Pruning

  • Bo Tao
  • Cen Chen*
  • Huimin Chen

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

5 Scopus citations

Abstract

Federated Learning (FL) has received widespread attention in 5G mobile edge networks (MENs), as it enables the collaborative training of deep learning models without disclosing users' private data. Because the growing number of parameters in machine learning models poses a tremendous challenge for resource-constrained devices, there is increasing interest in applying model compression methods to federated learning. However, most existing model compression methods require a cumbersome procedure that introduces many additional hyperparameters and much more training time. In this paper, we propose a novel Channel-wise Dynamic Pruning method for communication-efficient Federated Learning (FedCDP). The scheme dynamically evaluates channel-wise parameter importance via a fast Taylor series evaluation and communicates only the important parameters in Federated Learning. Extensive experiments show that the proposed method achieves both communication efficiency and model effectiveness on benchmark datasets. The source code is available at https://github.com/tabo0/FedCDP.
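The core idea in the abstract — scoring each channel's importance with a first-order Taylor approximation of the loss change and transmitting only the top-scoring channels — can be illustrated as follows. This is a minimal sketch under stated assumptions, not the paper's actual implementation: the function names (`taylor_channel_importance`, `topk_channel_mask`) and the exact scoring formula |w · ∂L/∂w| summed per output channel are hypothetical choices for illustration.

```python
import numpy as np

def taylor_channel_importance(weights, grads):
    """First-order Taylor estimate of each output channel's importance.

    Removing a parameter w changes the loss by roughly |w * dL/dw|
    (first-order Taylor expansion around the current weights). Summing
    this over all parameters in an output channel gives a channel score.
    weights, grads: arrays of shape (out_channels, in_channels, kh, kw).
    """
    contrib = np.abs(weights * grads)
    return contrib.reshape(contrib.shape[0], -1).sum(axis=1)

def topk_channel_mask(importance, keep_ratio):
    """Boolean mask keeping only the highest-importance channels.

    In a communication-efficient FL round, only the masked (kept)
    channels would be uploaded to the server.
    """
    k = max(1, int(np.ceil(keep_ratio * importance.size)))
    keep = np.argsort(importance)[-k:]
    mask = np.zeros(importance.size, dtype=bool)
    mask[keep] = True
    return mask

# Example: score 8 conv channels and keep the top half for communication.
rng = np.random.default_rng(0)
w = rng.standard_normal((8, 3, 3, 3))
g = rng.standard_normal((8, 3, 3, 3))
scores = taylor_channel_importance(w, g)
mask = topk_channel_mask(scores, keep_ratio=0.5)
```

Because the scores are recomputed from the current gradients each round, the set of communicated channels can change dynamically as training progresses, which matches the "dynamic" aspect described in the abstract.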

Original language: English
Title of host publication: 2023 IEEE Wireless Communications and Networking Conference, WCNC 2023 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781665491228
DOIs
State: Published - 2023
Event: 2023 IEEE Wireless Communications and Networking Conference, WCNC 2023 - Glasgow, United Kingdom
Duration: 26 Mar 2023 → 29 Mar 2023

Publication series

Name: IEEE Wireless Communications and Networking Conference, WCNC
Volume: 2023-March
ISSN (Print): 1525-3511

Conference

Conference: 2023 IEEE Wireless Communications and Networking Conference, WCNC 2023
Country/Territory: United Kingdom
City: Glasgow
Period: 26/03/23 → 29/03/23

Keywords

  • Federated Learning
  • Model Compression
  • Network Pruning
