TY - GEN
T1 - Multi-Width Neural Network-Assisted Hierarchical Federated Learning in Heterogeneous Cloud-Edge-Device Computing
AU - Wang, Haizhou
AU - Zou, Guobing
AU - Xu, Fei
AU - Cui, Yangguang
AU - Wei, Tongquan
N1 - Publisher Copyright:
© 2025 ACM.
PY - 2025/10/27
Y1 - 2025/10/27
N2 - Federated learning (FL), an emerging data-secure distributed training paradigm, unites massive isolated Internet of Things (IoT) device nodes to collaboratively train a global neural network (NN) model without exposing their local multimedia data. However, constrained by the synchronous NN model aggregation nature of FL, training latency is inconsistent among heterogeneous devices, which significantly degrades FL training efficiency. Meanwhile, frequent local NN training and transmission impose high energy consumption pressure on users. To tackle these issues, this paper proposes a multi-width NN-assisted hierarchical FL (HFL) framework for heterogeneous cloud-edge-device computing that achieves remarkable training speedup and energy conservation. Specifically, a heterogeneity-aware NN width coefficient determination algorithm, which flexibly assigns a subnet of suitable width to each user device based on its computing ability, is first applied to shorten HFL training latency. Subsequently, to integrate subnets with different width topologies, we design a width-aware adaptive NN model integration approach that effectively preserves the accuracy of the integrated global NN model. Finally, a latency-aware energy saving strategy is introduced to reduce energy consumption. Experimental results demonstrate that our proposed framework outperforms state-of-the-art benchmarks, attaining up to a 42.42% improvement in accuracy, an 81.5% reduction in training latency, and a 40.9% reduction in energy cost.
KW - energy cost
KW - hierarchical federated learning
KW - multi-width neural network
KW - system heterogeneity
KW - training latency
UR - https://www.scopus.com/pages/publications/105024077478
U2 - 10.1145/3746027.3754596
DO - 10.1145/3746027.3754596
M3 - Conference contribution
AN - SCOPUS:105024077478
T3 - MM 2025 - Proceedings of the 33rd ACM International Conference on Multimedia, Co-Located with MM 2025
SP - 11966
EP - 11975
BT - MM 2025 - Proceedings of the 33rd ACM International Conference on Multimedia, Co-Located with MM 2025
PB - Association for Computing Machinery, Inc
T2 - 33rd ACM International Conference on Multimedia, MM 2025
Y2 - 27 October 2025 through 31 October 2025
ER -