Low rank communication for federated learning

  • Huachi Zhou*
  • Junhong Cheng
  • Xiangfeng Wang
  • Bo Jin

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding, Conference contribution, peer-reviewed

12 Scopus citations

Abstract

Federated learning (FL) aims to learn a model with privacy protection through a distributed scheme over many clients. An important problem in FL is reducing the volume of data transmitted between clients and the parameter server during gradient uploading. Because the FL environment is unstable and enough client responses must be collected within a fixed time window, traditional model-compression practices are not entirely suitable for the FL setting. For instance, both low-rank filter designs and algorithms that pursue sparse neural networks generally require additional local training rounds to keep model accuracy from degrading excessively. To break through the transmission bottleneck, we propose Fedlr, a low-rank communication scheme that compresses the whole neural network in the client reporting phase. Our key innovation is the concept of an optimal compression rate. In addition, two measures compensate for the accuracy loss caused by truncation: training a low-rank parameter matrix and using iterative averaging. The algorithm is verified by experimental evaluation on public datasets; in particular, the parameters of a CNN trained on the MNIST dataset can be compressed 32 times with only a 2% loss of accuracy.
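The core mechanism the abstract describes, truncating a layer's parameter matrix to its top singular components before upload, can be sketched as follows. This is a minimal NumPy illustration under our own assumptions, not the authors' Fedlr implementation; the function names and the rank choice are ours, and the paper's "optimal compression rate" selection rule is not reproduced here.

```python
import numpy as np

def truncate_svd(W, k):
    """Keep the top-k singular components of weight matrix W.

    A client would upload the three small factors instead of W itself;
    the server reconstructs W ~= U_k @ diag(s_k) @ Vt_k.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :k], s[:k], Vt[:k, :]

def compression_rate(shape, k):
    """Ratio of the original parameter count to the truncated factors' count."""
    m, n = shape
    return (m * n) / (k * (m + n + 1))

rng = np.random.default_rng(0)
# A low-rank matrix stands in for a trained layer's weights (rank <= 8).
W = rng.standard_normal((64, 8)) @ rng.standard_normal((8, 64))

U_k, s_k, Vt_k = truncate_svd(W, 8)
W_hat = (U_k * s_k) @ Vt_k          # server-side reconstruction

print(compression_rate(W.shape, 8))  # ~3.97x fewer values uploaded
print(np.allclose(W, W_hat))         # exact here, since W truly has rank 8
```

For a genuinely full-rank weight matrix the reconstruction is lossy, which is why the paper pairs truncation with accuracy-recovery measures (training a low-rank parameter matrix and iterative averaging).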

Original language: English
Title of host publication: Database Systems for Advanced Applications. DASFAA 2020 International Workshops - BDMS, SeCoP, BDQM, GDMA, and AIDE, Proceedings
Editors: Yunmook Nah, Chulyun Kim, Seon Ho Kim, Yang-Sae Moon, Steven Euijong Whang
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 1-16
Number of pages: 16
ISBN (Print): 9783030594121
DOIs
State: Published - 2020
Event: 7th International Workshop on Big Data Management and Service, BDMS 2020, 6th International Symposium on Semantic Computing and Personalization, SeCoP 2020, 5th Big Data Quality Management, BDQM 2020, 4th International Workshop on Graph Data Management and Analysis, GDMA 2020, 1st International Workshop on Artificial Intelligence for Data Engineering, AIDE 2020, held in conjunction with the 25th International Conference on Database Systems for Advanced Applications, DASFAA 2020 - Jeju, Korea, Republic of
Duration: 24 Sep 2020 - 27 Sep 2020

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 12115 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 7th International Workshop on Big Data Management and Service, BDMS 2020, 6th International Symposium on Semantic Computing and Personalization, SeCoP 2020, 5th Big Data Quality Management, BDQM 2020, 4th International Workshop on Graph Data Management and Analysis, GDMA 2020, 1st International Workshop on Artificial Intelligence for Data Engineering, AIDE 2020, held in conjunction with the 25th International Conference on Database Systems for Advanced Applications, DASFAA 2020
Country/Territory: Korea, Republic of
City: Jeju
Period: 24/09/20 - 27/09/20

Keywords

  • Convolutional neural network
  • Federated learning
  • Low rank approximation
  • Matrix compression
  • Singular value decomposition
