When Adversarial Example Attacks Meet Vertical Federated Learning

  • Dan Meng*
  • Zhihui Fu
  • Chao Kong
  • Yue Qi
  • Guitao Cao*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

1 Scopus citation

Abstract

Federated learning (FL) is proposed to enable efficient machine learning while protecting the privacy of user data. In non-federated scenarios, machine learning models are vulnerable to adversarial attacks: an attacker can construct elaborate adversarial examples that greatly reduce the performance of the victim model. It is worth exploring how FL models differ from traditional models in the face of complex adversarial attacks. We are also curious about the robustness of different FL modeling methods against adversarial attacks, so as to provide suggestions for selecting suitable FL models in practical applications. Consequently, we perform adversarial example attacks on vertical federated learning and design experiments to study how vertical federated models perform under such attacks. The simulation results show that federated models are more robust than non-federated models. Moreover, in vertical federated learning, decision trees and XGBoost are much more robust than logistic regression and neural networks.
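The abstract does not specify which adversarial example attack the authors use; a common baseline for crafting such examples is the fast gradient sign method (FGSM), which perturbs each input feature by a small step in the direction that increases the model's loss. As a hedged illustration only (not the paper's method), here is a minimal FGSM sketch against a toy logistic-regression model, with hand-picked weights chosen purely for demonstration:

```python
import numpy as np

def fgsm_attack(x, w, b, y, eps):
    """Fast Gradient Sign Method against a logistic-regression model.

    x: input features; w, b: model weights and bias; y: true label (0 or 1);
    eps: perturbation budget. Returns the adversarial example.
    """
    z = x @ w + b
    p = 1.0 / (1.0 + np.exp(-z))      # sigmoid probability of class 1
    grad_x = (p - y) * w              # gradient of the cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)  # step that increases the loss

# Toy model (illustrative values, not from the paper)
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, -1.0])             # clean input: x @ w + b = 3.0 > 0, class 1
x_adv = fgsm_attack(x, w, b, y=1, eps=2.0)
# The perturbed input now scores x_adv @ w + b = -3.0 < 0: the prediction flips.
```

The paper's finding that tree-based models (decision trees, XGBoost) resist such attacks better than logistic regression and neural networks is consistent with gradient-based attacks relying on a smooth, differentiable decision surface, which tree ensembles lack.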

Original language: English
Title of host publication: Proceedings - 2022 IEEE SmartWorld, Ubiquitous Intelligence and Computing, Autonomous and Trusted Vehicles, Scalable Computing and Communications, Digital Twin, Privacy Computing, Metaverse, SmartWorld/UIC/ATC/ScalCom/DigitalTwin/PriComp/Metaverse 2022
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1016-1021
Number of pages: 6
ISBN (Electronic): 9798350346558
DOIs
State: Published - 2022
Event: 2022 IEEE SmartWorld, 19th IEEE International Conference on Ubiquitous Intelligence and Computing, 2022 IEEE International Conference on Autonomous and Trusted Vehicles Conference, 22nd IEEE International Conference on Scalable Computing and Communications, 2022 IEEE International Conference on Digital Twin, 8th IEEE International Conference on Privacy Computing and 2022 IEEE International Conference on Metaverse, SmartWorld/UIC/ATC/ScalCom/DigitalTwin/PriComp/Metaverse 2022 - Haikou, China
Duration: 15 Dec 2022 - 18 Dec 2022

Publication series

Name: Proceedings - 2022 IEEE SmartWorld, Ubiquitous Intelligence and Computing, Autonomous and Trusted Vehicles, Scalable Computing and Communications, Digital Twin, Privacy Computing, Metaverse, SmartWorld/UIC/ATC/ScalCom/DigitalTwin/PriComp/Metaverse 2022

Conference

Conference: 2022 IEEE SmartWorld, 19th IEEE International Conference on Ubiquitous Intelligence and Computing, 2022 IEEE International Conference on Autonomous and Trusted Vehicles Conference, 22nd IEEE International Conference on Scalable Computing and Communications, 2022 IEEE International Conference on Digital Twin, 8th IEEE International Conference on Privacy Computing and 2022 IEEE International Conference on Metaverse, SmartWorld/UIC/ATC/ScalCom/DigitalTwin/PriComp/Metaverse 2022
Country/Territory: China
City: Haikou
Period: 15/12/22 - 18/12/22

Keywords

  • Adversarial Attack
  • Federated Learning
  • Privacy Preserving

