TY - JOUR
T1 - FedGraft
T2 - Memory-Aware Heterogeneous Federated Learning via Model Grafting
AU - Liu, Ruixuan
AU - Hu, Ming
AU - Xia, Zeke
AU - Xie, Xiaofei
AU - Xia, Jun
AU - Zhang, Pengyu
AU - Huang, Yihao
AU - Chen, Mingsong
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - Although Federated Learning (FL) enables collaborative learning among devices without compromising their data privacy, it struggles with large-scale deployment in Mobile Edge Computing (MEC) applications. This is mainly because the varying memory sizes of edge devices limit the sizes of the models they can host. According to the Cannikin Law, when dealing with heterogeneous devices of different memory sizes, the learning capability of existing homogeneous FL schemes is greatly restricted by the weakest device. Worse still, although existing heterogeneous FL methods allow an MEC application to involve numerous devices equipped with heterogeneous models, their knowledge aggregation processes require either extra training data or architectural similarity among models. To address these issues, this paper presents a novel FL method named FedGraft that enables effective knowledge sharing among heterogeneous device models of different sizes without imposing unrealistic assumptions. In FedGraft, all device models are grafted onto a common rootstock via our proposed model partitioning and grafting mechanism, facilitating knowledge sharing among heterogeneous models on top of a tree-like global model. Meanwhile, using our proposed device selection strategy, the reassembled submodels extracted from the global model can be reasonably dispatched to devices with sufficient memory, thus enhancing overall FL performance. Comprehensive experimental results show that, compared with state-of-the-art heterogeneous FL methods, FedGraft improves inference accuracy by up to 17% in various memory-constrained scenarios.
AB - Although Federated Learning (FL) enables collaborative learning among devices without compromising their data privacy, it struggles with large-scale deployment in Mobile Edge Computing (MEC) applications. This is mainly because the varying memory sizes of edge devices limit the sizes of the models they can host. According to the Cannikin Law, when dealing with heterogeneous devices of different memory sizes, the learning capability of existing homogeneous FL schemes is greatly restricted by the weakest device. Worse still, although existing heterogeneous FL methods allow an MEC application to involve numerous devices equipped with heterogeneous models, their knowledge aggregation processes require either extra training data or architectural similarity among models. To address these issues, this paper presents a novel FL method named FedGraft that enables effective knowledge sharing among heterogeneous device models of different sizes without imposing unrealistic assumptions. In FedGraft, all device models are grafted onto a common rootstock via our proposed model partitioning and grafting mechanism, facilitating knowledge sharing among heterogeneous models on top of a tree-like global model. Meanwhile, using our proposed device selection strategy, the reassembled submodels extracted from the global model can be reasonably dispatched to devices with sufficient memory, thus enhancing overall FL performance. Comprehensive experimental results show that, compared with state-of-the-art heterogeneous FL methods, FedGraft improves inference accuracy by up to 17% in various memory-constrained scenarios.
KW - Federated learning (FL)
KW - knowledge aggregation
KW - memory constraint
KW - mobile edge computing (MEC)
KW - model grafting
UR - https://www.scopus.com/pages/publications/105012576987
U2 - 10.1109/TMC.2025.3591537
DO - 10.1109/TMC.2025.3591537
M3 - Article
AN - SCOPUS:105012576987
SN - 1536-1233
VL - 24
SP - 13506
EP - 13519
JO - IEEE Transactions on Mobile Computing
JF - IEEE Transactions on Mobile Computing
IS - 12
ER -