TY - JOUR
T1 - OpenVFL
T2 - A Vertical Federated Learning Framework With Stronger Privacy-Preserving
AU - Yang, Yunbo
AU - Chen, Xiang
AU - Pan, Yuhao
AU - Shen, Jiachen
AU - Cao, Zhenfu
AU - Dong, Xiaolei
AU - Li, Xiaoguo
AU - Sun, Jianfei
AU - Yang, Guomin
AU - Deng, Robert
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Federated learning (FL) allows multiple parties, each holding a dataset, to jointly train a model without leaking any information about their own datasets. In this paper, we focus on vertical FL (VFL), in which each party holds a dataset with the same sample space but a different feature space. All parties must first agree on the training dataset in the ID alignment phase. However, existing works may leak some information about the training dataset during this phase and thus cause privacy leakage. To address this issue, this paper proposes OpenVFL, a vertical federated learning framework with stronger privacy preservation. We first propose NCLPSI, a new variant of labeled PSI, which both parties can invoke to obtain the encrypted training dataset without leaking any additional information. Both parties then train the model over the encrypted training dataset. We also formally analyze the security of OpenVFL. In addition, experimental results show that OpenVFL achieves the best trade-off among accuracy, performance, and privacy compared with state-of-the-art works.
AB - Federated learning (FL) allows multiple parties, each holding a dataset, to jointly train a model without leaking any information about their own datasets. In this paper, we focus on vertical FL (VFL), in which each party holds a dataset with the same sample space but a different feature space. All parties must first agree on the training dataset in the ID alignment phase. However, existing works may leak some information about the training dataset during this phase and thus cause privacy leakage. To address this issue, this paper proposes OpenVFL, a vertical federated learning framework with stronger privacy preservation. We first propose NCLPSI, a new variant of labeled PSI, which both parties can invoke to obtain the encrypted training dataset without leaking any additional information. Both parties then train the model over the encrypted training dataset. We also formally analyze the security of OpenVFL. In addition, experimental results show that OpenVFL achieves the best trade-off among accuracy, performance, and privacy compared with state-of-the-art works.
KW - Federated learning
KW - multiparty computation
KW - private set intersection
UR - https://www.scopus.com/pages/publications/85206993078
U2 - 10.1109/TIFS.2024.3477924
DO - 10.1109/TIFS.2024.3477924
M3 - Article
AN - SCOPUS:85206993078
SN - 1556-6013
VL - 19
SP - 9670
EP - 9681
JO - IEEE Transactions on Information Forensics and Security
JF - IEEE Transactions on Information Forensics and Security
ER -