TY - JOUR
T1 - UEFL
T2 - Universal and Efficient Privacy-Preserving Federated Learning
AU - Li, Zhiqiang
AU - Bao, Haiyong
AU - Pan, Hao
AU - Guan, Menghong
AU - Huang, Cheng
AU - Dai, Hong Ning
N1 - Publisher Copyright:
© 2014 IEEE.
PY - 2025
Y1 - 2025
N2 - Federated learning (FL) is a distributed machine learning framework that allows model training across multiple clients without requiring access to their local data. However, FL poses risks: for example, curious clients might conduct inference attacks (e.g., membership inference attacks, model-inversion attacks) to extract sensitive information from other participants. Existing solutions typically fail to strike a good balance between performance and privacy, or are applicable only to specific FL scenarios. To address these challenges, we propose a universal and efficient privacy-preserving FL framework based on matrix theory. Specifically, we design the improved extended Hill cryptosystem (IEHC), which efficiently encrypts model parameters while supporting the secure ReLU function. To accommodate different training tasks, we design the secure loss function computation (SLFC) protocol, which computes derivatives of various loss functions while preserving the data privacy of both client and server. We implement SLFC for three classic loss functions: MSE, cross-entropy, and L1. Extensive experimental results demonstrate that our approach robustly defends against various inference attacks. Furthermore, model training experiments conducted in various FL scenarios indicate that our method shows significant advantages across most metrics.
AB - Federated learning (FL) is a distributed machine learning framework that allows model training across multiple clients without requiring access to their local data. However, FL poses risks: for example, curious clients might conduct inference attacks (e.g., membership inference attacks, model-inversion attacks) to extract sensitive information from other participants. Existing solutions typically fail to strike a good balance between performance and privacy, or are applicable only to specific FL scenarios. To address these challenges, we propose a universal and efficient privacy-preserving FL framework based on matrix theory. Specifically, we design the improved extended Hill cryptosystem (IEHC), which efficiently encrypts model parameters while supporting the secure ReLU function. To accommodate different training tasks, we design the secure loss function computation (SLFC) protocol, which computes derivatives of various loss functions while preserving the data privacy of both client and server. We implement SLFC for three classic loss functions: MSE, cross-entropy, and L1. Extensive experimental results demonstrate that our approach robustly defends against various inference attacks. Furthermore, model training experiments conducted in various FL scenarios indicate that our method shows significant advantages across most metrics.
KW - Federated learning (FL)
KW - inference attacks
KW - matrix theory
KW - privacy-preservation
UR - https://www.scopus.com/pages/publications/85214511877
U2 - 10.1109/JIOT.2025.3525731
DO - 10.1109/JIOT.2025.3525731
M3 - Article
AN - SCOPUS:85214511877
SN - 2327-4662
VL - 12
SP - 14333
EP - 14347
JO - IEEE Internet of Things Journal
JF - IEEE Internet of Things Journal
IS - 10
ER -