TY - GEN
T1 - When Adversarial Example Attacks Meet Vertical Federated Learning
AU - Meng, Dan
AU - Fu, Zhihui
AU - Kong, Chao
AU - Qi, Yue
AU - Cao, Guitao
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Federated learning (FL) is proposed to enable efficient machine learning while protecting the privacy of user data. In non-federated scenarios, machine learning models are vulnerable to adversarial attacks. An attacker can construct elaborate adversarial examples that greatly reduce the performance of the victim model. It is worth exploring how FL models differ from traditional models in the face of complex adversarial attacks. We are also interested in the robustness of different FL modeling methods against adversarial attacks, so as to provide suggestions for selecting suitable FL models in practical applications. Consequently, we perform adversarial example attacks on vertical federated learning and design experiments to study how vertical federated models perform under adversarial example attacks. The simulation results show that federated models are more robust than non-federated models. Moreover, in vertical federated learning, decision trees and XGBoost are much more robust than logistic regression and neural networks.
AB - Federated learning (FL) is proposed to enable efficient machine learning while protecting the privacy of user data. In non-federated scenarios, machine learning models are vulnerable to adversarial attacks. An attacker can construct elaborate adversarial examples that greatly reduce the performance of the victim model. It is worth exploring how FL models differ from traditional models in the face of complex adversarial attacks. We are also interested in the robustness of different FL modeling methods against adversarial attacks, so as to provide suggestions for selecting suitable FL models in practical applications. Consequently, we perform adversarial example attacks on vertical federated learning and design experiments to study how vertical federated models perform under adversarial example attacks. The simulation results show that federated models are more robust than non-federated models. Moreover, in vertical federated learning, decision trees and XGBoost are much more robust than logistic regression and neural networks.
KW - Adversarial Attack
KW - Federated Learning
KW - Privacy Preserving
UR - https://www.scopus.com/pages/publications/85168141868
U2 - 10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00150
DO - 10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00150
M3 - Conference contribution
AN - SCOPUS:85168141868
T3 - Proceedings - 2022 IEEE SmartWorld, Ubiquitous Intelligence and Computing, Autonomous and Trusted Vehicles, Scalable Computing and Communications, Digital Twin, Privacy Computing, Metaverse, SmartWorld/UIC/ATC/ScalCom/DigitalTwin/PriComp/Metaverse 2022
SP - 1016
EP - 1021
BT - Proceedings - 2022 IEEE SmartWorld, Ubiquitous Intelligence and Computing, Autonomous and Trusted Vehicles, Scalable Computing and Communications, Digital Twin, Privacy Computing, Metaverse, SmartWorld/UIC/ATC/ScalCom/DigitalTwin/PriComp/Metaverse 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2022 IEEE SmartWorld, 19th IEEE International Conference on Ubiquitous Intelligence and Computing, 2022 IEEE International Conference on Autonomous and Trusted Vehicles Conference, 22nd IEEE International Conference on Scalable Computing and Communications, 2022 IEEE International Conference on Digital Twin, 8th IEEE International Conference on Privacy Computing and 2022 IEEE International Conference on Metaverse, SmartWorld/UIC/ATC/ScalCom/DigitalTwin/PriComp/Metaverse 2022
Y2 - 15 December 2022 through 18 December 2022
ER -