TY - JOUR
T1 - S2FL
T2 - Toward Efficient and Accurate Heterogeneous Split Federated Learning
AU - Yan, Dengke
AU - Hu, Ming
AU - Xie, Xiaofei
AU - Yang, Yanxin
AU - Chen, Mingsong
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - Along with the prosperity of Artificial Intelligence (AI) and Internet of Things (IoT) techniques, Split Federated Learning (SFL) is becoming popular in designing Artificial Intelligence of Things (AIoT) applications, since it enables knowledge sharing among resource-constrained devices without compromising data privacy. By offloading a portion of the full model onto cloud servers, SFL can not only enable AIoT devices to accommodate large models beyond their capabilities, but also reduce their overall local training efforts. However, due to various inherent data and device heterogeneity issues, existing SFL methods greatly suffer from low inference performance and slow convergence, especially when the network bandwidth of devices is limited. To address these problems, this paper presents a novel SFL approach named Sliding Split Federated Learning (S2FL). Unlike traditional SFL methods that train the same portion of models on each device, S2FL maintains different model portions on heterogeneous AIoT devices adaptively according to their current computing capability and network bandwidth based on our proposed adaptive model sliding split method, which can balance the training time between devices and mitigate the notorious straggler problem caused by devices with weak computation power and long network transmission time. Meanwhile, based on our proposed data balance-aware training mechanism, S2FL enables the training of the server model portion on the balanced local data of grouped devices, thus alleviating the degradation of inference accuracy caused by data heterogeneity. Comprehensive experimental results highlight the superiority of S2FL over conventional SFL methods, where S2FL can achieve up to 11.58% inference accuracy improvement and 3.82× training acceleration.
KW - AIoT System
KW - Data Heterogeneity
KW - Deep Learning
KW - Split Federated Learning
KW - Straggler Problem
UR - https://www.scopus.com/pages/publications/105020705897
U2 - 10.1109/TC.2025.3626198
DO - 10.1109/TC.2025.3626198
M3 - Article
AN - SCOPUS:105020705897
SN - 0018-9340
JO - IEEE Transactions on Computers
JF - IEEE Transactions on Computers
ER -