TY - JOUR
T1 - FlocOff: Data Heterogeneity Resilient Federated Learning With Communication-Efficient Edge Offloading
T2 - IEEE Journal on Selected Areas in Communications
AU - Ma, Mulei
AU - Gong, Chenyu
AU - Zeng, Liekang
AU - Yang, Yang
AU - Wu, Liantao
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024/11
Y1 - 2024/11
N2 - Federated Learning (FL) has emerged as a fundamental learning paradigm for harnessing massive data scattered across geo-distributed edge devices in a privacy-preserving way. Given the heterogeneous deployment of edge devices, however, their data are usually Non-IID, which poses significant challenges to FL, including degraded training accuracy, intensive communication costs, and high computing complexity. Traditional approaches typically rely on adaptive mechanisms, which may suffer from poor scalability, increased computational overhead, and limited adaptability to diverse edge environments. Instead, this paper leverages the observation that computation offloading involves inherent functionalities, such as node matching and service correlation, that can reshape data distributions, and proposes the Federated learning based on computing Offloading (FlocOff) framework to address data heterogeneity and resource constraints. Specifically, FlocOff formulates the FL process with Non-IID data in edge scenarios and derives a rigorous analysis of the impact of imbalanced data distribution. Based on this, FlocOff decouples the optimization into two steps: 1) Minimizing the Kullback-Leibler (KL) divergence via Computation Offloading scheduling (MKL-CO); 2) Minimizing the Communication Cost through Resource Allocation (MCC-RA). Extensive experimental results demonstrate that FlocOff effectively improves model convergence and accuracy by 14.3%-32.7% while reducing data heterogeneity under various data distributions.
KW - Federated learning
KW - computation offloading
KW - edge computing
KW - resource allocation
UR - https://www.scopus.com/pages/publications/85199412590
U2 - 10.1109/JSAC.2024.3431526
DO - 10.1109/JSAC.2024.3431526
M3 - Article
AN - SCOPUS:85199412590
SN - 0733-8716
VL - 42
SP - 3262
EP - 3277
JO - IEEE Journal on Selected Areas in Communications
JF - IEEE Journal on Selected Areas in Communications
IS - 11
ER -