TY - GEN
T1 - Low rank communication for federated learning
AU - Zhou, Huachi
AU - Cheng, Junhong
AU - Wang, Xiangfeng
AU - Jin, Bo
N1 - Publisher Copyright:
© Springer Nature Switzerland AG 2020.
PY - 2020
Y1 - 2020
N2 - Federated learning (FL) aims to learn a model with privacy protection through a distributed scheme over many clients. In FL, an important problem is to reduce the quantity of data transmitted between clients and the parameter server during gradient uploading. Because the FL environment is unstable and requires enough client responses to be collected within a certain period of time, traditional model compression practices are not entirely suitable for the FL setting. For instance, both the design of low-rank filters and algorithms that pursue sparse neural networks generally need to perform more training rounds locally to ensure that model accuracy is not excessively degraded. To break through the transmission bottleneck, we propose low-rank communication (Fedlr) to compress the whole neural network in the client reporting phase. Our innovation is to propose the concept of an optimal compression rate. In addition, two measures are introduced to compensate for the accuracy loss caused by truncation: training low-rank parameter matrices and using iterative averaging. The algorithm is verified by experimental evaluation on public datasets. In particular, CNN model parameters trained on the MNIST dataset can be compressed 32 times with only a 2% loss in accuracy.
AB - Federated learning (FL) aims to learn a model with privacy protection through a distributed scheme over many clients. In FL, an important problem is to reduce the quantity of data transmitted between clients and the parameter server during gradient uploading. Because the FL environment is unstable and requires enough client responses to be collected within a certain period of time, traditional model compression practices are not entirely suitable for the FL setting. For instance, both the design of low-rank filters and algorithms that pursue sparse neural networks generally need to perform more training rounds locally to ensure that model accuracy is not excessively degraded. To break through the transmission bottleneck, we propose low-rank communication (Fedlr) to compress the whole neural network in the client reporting phase. Our innovation is to propose the concept of an optimal compression rate. In addition, two measures are introduced to compensate for the accuracy loss caused by truncation: training low-rank parameter matrices and using iterative averaging. The algorithm is verified by experimental evaluation on public datasets. In particular, CNN model parameters trained on the MNIST dataset can be compressed 32 times with only a 2% loss in accuracy.
KW - Convolutional neural network
KW - Federated learning
KW - Low rank approximation
KW - Matrix compression
KW - Singular value decomposition
UR - https://www.scopus.com/pages/publications/85092137654
U2 - 10.1007/978-3-030-59413-8_1
DO - 10.1007/978-3-030-59413-8_1
M3 - Conference contribution
AN - SCOPUS:85092137654
SN - 9783030594121
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 1
EP - 16
BT - Database Systems for Advanced Applications. DASFAA 2020 International Workshops - BDMS, SeCoP, BDQM, GDMA, and AIDE, Proceedings
A2 - Nah, Yunmook
A2 - Kim, Chulyun
A2 - Kim, Seon Ho
A2 - Moon, Yang-Sae
A2 - Whang, Steven Euijong
PB - Springer Science and Business Media Deutschland GmbH
T2 - 7th International Workshop on Big Data Management and Service, BDMS 2020, 6th International Symposium on Semantic Computing and Personalization, SeCoP 2020, 5th Big Data Quality Management, BDQM 2020, 4th International Workshop on Graph Data Management and Analysis, GDMA 2020, 1st International Workshop on Artificial Intelligence for Data Engineering, AIDE 2020, held in conjunction with the 25th International Conference on Database Systems for Advanced Applications, DASFAA 2020
Y2 - 24 September 2020 through 27 September 2020
ER -