TY - JOUR
T1 - PI-Fed
T2 - Continual Federated Learning With Parameter-Level Importance Aggregation
AU - Yu, Lang
AU - Ge, Lina
AU - Wang, Guanghui
AU - Yin, Jianghao
AU - Chen, Qin
AU - Zhou, Jie
AU - He, Liang
N1 - Publisher Copyright:
© 2014 IEEE.
PY - 2024
Y1 - 2024
AB - Federated Learning (FL) has drawn much attention for distributed systems over the Internet of Things (IoT), since it enables collaborative machine learning on heterogeneous devices while resolving concerns about privacy leakage. Due to the catastrophic forgetting (CF) phenomenon of optimization methods, existing FL approaches are restricted to single-task learning and typically assume that data from all nodes are simultaneously available during training. However, in practical IoT scenarios, data preparation across nodes may be asynchronous, and different tasks require incremental training. To address these issues, we propose a continual FL (CFL) framework with parameter-level importance aggregation (PI-Fed), which supports collaborative task-incremental learning with privacy preservation. Specifically, PI-Fed evaluates the importance of each parameter in the global model to all historical tasks; this importance is computed locally and aggregated at the central server. The server then performs soft-masking on the averaged gradient collected from local clients based on the parameter importance. By minimizing changes to important parameters, PI-Fed effectively overcomes CF while also achieving high efficiency without experience replay. Extensive experiments on four benchmarks with up to 20 sequential tasks demonstrate that PI-Fed significantly outperforms traditional FL baselines (FedAvg, FedNova, and SCAFFOLD).
KW - Continual learning (CL)
KW - federated learning (FL)
KW - first-order optimization
KW - parameter importance
UR - https://www.scopus.com/pages/publications/105002089751
U2 - 10.1109/JIOT.2024.3440029
DO - 10.1109/JIOT.2024.3440029
M3 - Article
AN - SCOPUS:105002089751
SN - 2327-4662
VL - 11
SP - 37187
EP - 37199
JO - IEEE Internet of Things Journal
JF - IEEE Internet of Things Journal
IS - 22
ER -
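
A minimal sketch in Python/NumPy of the importance-weighted soft-masking aggregation that the abstract describes, assuming importance scores are normalized to [0, 1]. The helper names (aggregate_importance, soft_mask_update) are hypothetical illustrations, not the paper's actual API or formulas.

    import numpy as np

    def aggregate_importance(client_importances):
        # Average the per-parameter importance scores reported by each client,
        # mirroring the abstract's "computed locally and aggregated at the
        # central server" step. (Hypothetical helper; exact rule assumed.)
        return np.mean(np.stack(client_importances), axis=0)

    def soft_mask_update(avg_grad, importance):
        # Soft-mask the server-side averaged gradient: parameters deemed
        # important to earlier tasks receive proportionally smaller updates,
        # so important parameters change minimally (importance in [0, 1]).
        return (1.0 - importance) * avg_grad

    # Example: three clients report gradients and importances for a
    # four-parameter global model.
    grads = [np.array([0.2, -0.1, 0.4, 0.05]) for _ in range(3)]
    imps = [np.array([0.9, 0.1, 0.5, 0.0]) for _ in range(3)]

    avg_grad = np.mean(np.stack(grads), axis=0)
    importance = aggregate_importance(imps)
    masked = soft_mask_update(avg_grad, importance)
    print(masked)  # near-zero update where importance is near 1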