TY - GEN
T1 - Satellite Federated Fine-Tuning for Foundation Models
T2 - 2024 IEEE Global Communications Conference, GLOBECOM 2024
AU - Zhu, Yan
AU - Yang, Peng
AU - Zhu, Jingyang
AU - Wen, Dingzhu
AU - Wang, Ting
AU - Zhou, Yong
AU - Shi, Yuanming
AU - Jiang, Chunxiao
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - With the surge in the number of low Earth orbit (LEO) satellites, continuous research has emerged on using satellite data to train artificial intelligence models. On one hand, traditional centralized training on the ground is not feasible due to privacy concerns and limited bandwidth for downloading raw satellite data. On the other hand, due to the limited energy and computational capability of satellites, training directly on satellites suffers from prolonged latency, especially for large models. To alleviate these issues, we propose a novel satellite-ground collaborative federated fine-tuning architecture, where ground stations (GSs) and satellites collaboratively train a global model without the need for data downloads. In this proposed architecture, satellites serve as edge devices and the ground server serves as a coordinator. However, the short satellite-ground communication windows caused by the high mobility of satellites and the substantial intra-orbit data transmission bring special challenges to the transmission process of federated edge learning. To tackle these challenges, we carefully design the satellite-ground collaborative fine-tuning architecture and utilize an optimized ring all-reduce algorithm and network flow algorithm to enhance the intra-orbit and ground-satellite transmissions, respectively. Experimental results demonstrate that our proposed architecture significantly reduces the training time by 40% compared to training solely on satellites.
AB - With the surge in the number of low Earth orbit (LEO) satellites, continuous research has emerged on using satellite data to train artificial intelligence models. On one hand, traditional centralized training on the ground is not feasible due to privacy concerns and limited bandwidth for downloading raw satellite data. On the other hand, due to the limited energy and computational capability of satellites, training directly on satellites suffers from prolonged latency, especially for large models. To alleviate these issues, we propose a novel satellite-ground collaborative federated fine-tuning architecture, where ground stations (GSs) and satellites collaboratively train a global model without the need for data downloads. In this proposed architecture, satellites serve as edge devices and the ground server serves as a coordinator. However, the short satellite-ground communication windows caused by the high mobility of satellites and the substantial intra-orbit data transmission bring special challenges to the transmission process of federated edge learning. To tackle these challenges, we carefully design the satellite-ground collaborative fine-tuning architecture and utilize an optimized ring all-reduce algorithm and network flow algorithm to enhance the intra-orbit and ground-satellite transmissions, respectively. Experimental results demonstrate that our proposed architecture significantly reduces the training time by 40% compared to training solely on satellites.
UR - https://www.scopus.com/pages/publications/105000825669
U2 - 10.1109/GLOBECOM52923.2024.10901682
DO - 10.1109/GLOBECOM52923.2024.10901682
M3 - Conference contribution
AN - SCOPUS:105000825669
T3 - Proceedings - IEEE Global Communications Conference, GLOBECOM
SP - 5030
EP - 5035
BT - GLOBECOM 2024 - 2024 IEEE Global Communications Conference
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 8 December 2024 through 12 December 2024
ER -