TY - GEN
T1 - Towards Instance-wise Personalized Federated Learning via Semi-Implicit Bayesian Prompt Tuning
AU - Ye, Tiandi
AU - Liu, Wenyan
AU - Yao, Kai
AU - Li, Lichun
AU - Su, Shangchao
AU - Chen, Cen
AU - Li, Xiang
AU - Yin, Shan
AU - Gao, Ming
N1 - Publisher Copyright:
© 2025 ACM.
PY - 2025/11/10
Y1 - 2025/11/10
N2 - Federated learning (FL) is a privacy-preserving machine learning paradigm that enables collaborative model training across multiple distributed clients without disclosing their raw data. Personalized federated learning (pFL) has gained increasing attention for its ability to address data heterogeneity. However, most existing pFL methods assume that each client's data follows a single distribution and learn one client-level personalized model for each client. This assumption often fails in practice, where a single client may possess data from multiple sources or domains, resulting in significant intra-client heterogeneity and suboptimal performance. To tackle this challenge, we propose pFedBayesPT, a fine-grained instance-wise pFL framework based on visual prompt tuning. Specifically, we formulate instance-wise prompt generation from a Bayesian perspective and model the prompt posterior as an implicit distribution to capture diverse visual semantics. We derive a variational training objective under the semi-implicit variational inference framework. Extensive experiments on benchmark datasets demonstrate that pFedBayesPT consistently outperforms existing pFL methods under both feature and label heterogeneity settings.
AB - Federated learning (FL) is a privacy-preserving machine learning paradigm that enables collaborative model training across multiple distributed clients without disclosing their raw data. Personalized federated learning (pFL) has gained increasing attention for its ability to address data heterogeneity. However, most existing pFL methods assume that each client's data follows a single distribution and learn one client-level personalized model for each client. This assumption often fails in practice, where a single client may possess data from multiple sources or domains, resulting in significant intra-client heterogeneity and suboptimal performance. To tackle this challenge, we propose pFedBayesPT, a fine-grained instance-wise pFL framework based on visual prompt tuning. Specifically, we formulate instance-wise prompt generation from a Bayesian perspective and model the prompt posterior as an implicit distribution to capture diverse visual semantics. We derive a variational training objective under the semi-implicit variational inference framework. Extensive experiments on benchmark datasets demonstrate that pFedBayesPT consistently outperforms existing pFL methods under both feature and label heterogeneity settings.
KW - federated learning
KW - instance-wise personalization
KW - prompt tuning
KW - variational inference
UR - https://www.scopus.com/pages/publications/105023140365
U2 - 10.1145/3746252.3761097
DO - 10.1145/3746252.3761097
M3 - Conference contribution
AN - SCOPUS:105023140365
T3 - CIKM 2025 - Proceedings of the 34th ACM International Conference on Information and Knowledge Management
SP - 3877
EP - 3887
BT - CIKM 2025 - Proceedings of the 34th ACM International Conference on Information and Knowledge Management
PB - Association for Computing Machinery, Inc
T2 - 34th ACM International Conference on Information and Knowledge Management, CIKM 2025
Y2 - 10 November 2025 through 14 November 2025
ER -