TY - GEN
T1 - Accelerating Wireless Distributed Learning through Hybrid Split and Federated Learning
AU - Li, Xuefei
AU - Guo, Kun
AU - Wang, Xijun
AU - Gao, Ruifeng
AU - Yang, Howard H.
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Federated learning (FL) and split learning (SL) are two prominent distributed learning modes. FL allows for parallel training but demands significant computational resources on devices to train deep neural network models. Conversely, SL reduces the computational burden on devices and can enhance learning performance, though it often leads to longer training time due to its sequential nature. In this paper, we introduce a novel distributed learning framework, hybrid split and federated learning (HSFL), which combines the advantages of both FL and SL over wireless networks. To achieve a lower training loss within a shorter latency, we start with the convergence analysis of HSFL, followed by a joint optimization problem of the learning mode selection, model splitting, and bandwidth allocation. To solve the problem, we propose a two-stage algorithm. First, we find the optimal bandwidth allocation and model splitting with a fixed learning mode. Then, we select the optimal learning mode based on the above optimal values. Experimental results validate the superior learning efficacy of our proposed algorithm.
AB - Federated learning (FL) and split learning (SL) are two prominent distributed learning modes. FL allows for parallel training but demands significant computational resources on devices to train deep neural network models. Conversely, SL reduces the computational burden on devices and can enhance learning performance, though it often leads to longer training time due to its sequential nature. In this paper, we introduce a novel distributed learning framework, hybrid split and federated learning (HSFL), which combines the advantages of both FL and SL over wireless networks. To achieve a lower training loss within a shorter latency, we start with the convergence analysis of HSFL, followed by a joint optimization problem of the learning mode selection, model splitting, and bandwidth allocation. To solve the problem, we propose a two-stage algorithm. First, we find the optimal bandwidth allocation and model splitting with a fixed learning mode. Then, we select the optimal learning mode based on the above optimal values. Experimental results validate the superior learning efficacy of our proposed algorithm.
KW - Federated learning
KW - bandwidth allocation
KW - learning mode selection
KW - model splitting
KW - split learning
UR - https://www.scopus.com/pages/publications/105000820327
U2 - 10.1109/GLOBECOM52923.2024.10901537
DO - 10.1109/GLOBECOM52923.2024.10901537
M3 - Conference contribution
AN - SCOPUS:105000820327
T3 - Proceedings - IEEE Global Communications Conference, GLOBECOM
SP - 806
EP - 811
BT - GLOBECOM 2024 - 2024 IEEE Global Communications Conference
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2024 IEEE Global Communications Conference, GLOBECOM 2024
Y2 - 8 December 2024 through 12 December 2024
ER -