TY - GEN
T1 - UPFL: Unsupervised Personalized Federated Learning towards New Clients
T2 - 2024 SIAM International Conference on Data Mining, SDM 2024
AU - Ye, Tiandi
AU - Chen, Cen
AU - Wang, Yinggui
AU - Li, Xiang
AU - Gao, Ming
N1 - Publisher Copyright:
Copyright © 2024 by SIAM.
PY - 2024
Y1 - 2024
AB - Personalized federated learning (pFL) has gained significant attention as a promising approach to address the challenge of data heterogeneity. In this paper, we address a relatively unexplored problem in federated learning. When a federated model has been trained and deployed, and an unlabeled new client joins, providing a personalized model for the new client becomes a highly challenging task. To address this challenge, we extend the adaptive risk minimization technique into the unsupervised pFL setting and propose our method, FedTTA. We further improve FedTTA with two simple yet highly effective optimization strategies: enhancing the training of the adaptation model with proxy regularization and early-stopping the adaptation through entropy. Moreover, we propose a knowledge distillation loss specifically designed for FedTTA to address the device heterogeneity. Extensive experiments on five datasets against eleven baselines demonstrate the effectiveness of our proposed FedTTA and its variants. The code is available at: https://github.com/anonymous-federated-learning/code.
KW - heterogeneous federated learning
KW - personalized federated learning
KW - unsupervised learning
UR - https://www.scopus.com/pages/publications/85193542487
M3 - Conference contribution
AN - SCOPUS:85193542487
T3 - Proceedings of the 2024 SIAM International Conference on Data Mining, SDM 2024
SP - 851
EP - 859
BT - Proceedings of the 2024 SIAM International Conference on Data Mining, SDM 2024
A2 - Shekhar, Shashi
A2 - Papalexakis, Vagelis
A2 - Gao, Jing
A2 - Jiang, Zhe
A2 - Riondato, Matteo
PB - Society for Industrial and Applied Mathematics Publications
Y2 - 18 April 2024 through 20 April 2024
ER -