Defense against Adversarial Attacks with an Induced Class

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

Although deep neural networks have succeeded in a wide range of real-world applications, their prediction performance degrades significantly under adversarial attacks. In this work, we investigate how adversarial attacks alter the pattern of the prediction distribution and argue that this alteration is the primary cause of the performance drop. To this end, we propose a simple yet effective method that introduces an induced class to attract the adversarial attack and thus protect the prediction order of the original classes. Experiments on two real-world datasets demonstrate that the proposed method maintains prediction performance on both natural and adversarial examples.
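The core idea of the abstract can be illustrated at inference time: if a classifier is trained with one extra "induced" class that adversarial perturbations are drawn toward, the final prediction can be taken over the original classes only, so an attack that inflates the induced-class logit leaves the ranking of the original classes intact. The sketch below is a minimal illustration of that decision rule, not the authors' implementation; the number of classes, the placement of the induced class as the last logit, and the toy logit values are all assumptions.

```python
import numpy as np

NUM_CLASSES = 10  # original classes; index NUM_CLASSES is the induced class (assumption)

def predict_with_induced_class(logits):
    """Return the predicted original class by ranking only the first
    NUM_CLASSES logits; the trailing induced-class logit is ignored,
    so mass attracted to it cannot change the original-class order."""
    original = logits[..., :NUM_CLASSES]
    return int(np.argmax(original))

# Toy example: an adversarial perturbation inflates the induced-class logit,
# but the ordering of the original classes is untouched.
clean = np.array([2.0, 0.5, 1.0, *([0.0] * 7), -1.0])  # 10 original + induced
adv = clean.copy()
adv[-1] = 5.0  # attack drawn toward the induced class
assert predict_with_induced_class(clean) == predict_with_induced_class(adv) == 0
```

In a full pipeline the network would be trained on NUM_CLASSES + 1 outputs so that adversarial examples are pushed into the induced class during training; only the inference-side rule is sketched here.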

Original language: English
Title of host publication: IJCNN 2021 - International Joint Conference on Neural Networks, Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9780738133669
DOIs
State: Published - 18 Jul 2021
Event: 2021 International Joint Conference on Neural Networks, IJCNN 2021 - Virtual, Online, China
Duration: 18 Jul 2021 – 22 Jul 2021

Publication series

Name: Proceedings of the International Joint Conference on Neural Networks
Volume: 2021-July
ISSN (Print): 2161-4393
ISSN (Electronic): 2161-4407

Conference

Conference: 2021 International Joint Conference on Neural Networks, IJCNN 2021
Country/Territory: China
City: Virtual, Online
Period: 18/07/21 – 22/07/21

Keywords

  • Adversarial attack
  • Deep neural network
  • Defense

