
When foresight pruning meets zeroth-order optimization: Efficient federated learning for low-memory devices

  • Pengyu Zhang
  • Yingjie Liu
  • Yingbo Zhou
  • Xian Wei
  • Mingsong Chen*
  • *Corresponding author for this work
  • East China Normal University

Research output: Contribution to journal › Article › peer-review

Abstract

To facilitate Federated Learning (FL) on low-memory embedded devices, various federated pruning methods aim to reduce memory usage during inference but have a limited impact on training memory burdens. Alternatively, zeroth-order or backpropagation-free (BP-Free) methods can partially alleviate memory consumption but still face computational overhead as the number of model parameters increases. To address these issues, we propose a memory-efficient federated foresight pruning method based on the Neural Tangent Kernel (NTK), which seamlessly integrates with federated BP-Free training frameworks. We approximate federated NTK using local NTK matrices and demonstrate that the data-free property of our method significantly reduces approximation error in highly heterogeneous data scenarios. Our method improves the vanilla BP-Free method with fewer floating point operations (FLOPs) and alleviates memory pressure during pruning and training, making FL more feasible for low-memory devices. Experimental results on simulation- and real test-bed-based platforms show that our method improves the accuracy by up to 6.35% and reduces the FLOPs by up to 57% against the vanilla BP-Free method while maintaining the same 9× memory usage saving.
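The abstract contrasts backpropagation-based training with zeroth-order (BP-free) methods, whose memory advantage comes from estimating gradients with forward passes only. A minimal sketch of this idea, using a single-sample SPSA estimator on a toy quadratic loss (the function names and the loss are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def spsa_grad(loss_fn, theta, eps=1e-3, rng=None):
    # One SPSA sample: two forward evaluations, no backward pass.
    # Memory stays at the size of the parameters because no activation
    # graph is stored -- the property BP-free training exploits.
    rng = np.random.default_rng() if rng is None else rng
    z = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher direction
    d = (loss_fn(theta + eps * z) - loss_fn(theta - eps * z)) / (2.0 * eps)
    return d * z  # directional derivative projected back onto z

# Toy quadratic loss whose true gradient is 2 * theta.
loss = lambda w: float(np.sum(w ** 2))
theta = np.array([1.0, -2.0, 0.5])

# Averaging many one-sample estimates recovers the gradient in expectation,
# which is why the estimator's cost (FLOPs per sample) matters so much --
# the motivation for combining it with foresight pruning.
est = np.mean(
    [spsa_grad(loss, theta, rng=np.random.default_rng(s)) for s in range(1000)],
    axis=0,
)
```

Each estimate costs two forward passes regardless of parameter count, but its variance grows with model size; pruning the model before training shrinks both the per-sample FLOPs and that variance, which is the synergy the abstract describes.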

Original language: English
Article number: 103526
Journal: Journal of Systems Architecture
Volume: 168
DOI
Publication status: Published - November 2025
