Adversarial Example Defense via Perturbation Grading Strategy

Shaowei Zhu, Wanli Lyu, Bin Li, Zhaoxia Yin, Bin Luo

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

1 Scopus citation

Abstract

Deep Neural Networks (DNNs) have been widely used in many fields. However, studies have shown that DNNs are easily fooled by adversarial examples, whose tiny perturbations can greatly mislead a model's predictions. Furthermore, even when malicious attackers cannot obtain all of the underlying model parameters, they can still use adversarial examples to attack various DNN-based task systems. Researchers have proposed various defense methods to protect DNNs, such as reducing the aggressiveness of adversarial examples through preprocessing or improving model robustness by adding modules. However, some defense methods are effective only against small-scale examples or small perturbations, and offer limited protection against adversarial examples with large perturbations. This paper assigns different defense strategies to adversarial perturbations of different strengths by grading the perturbation on each input example. Experimental results show that the proposed method effectively improves defense performance. In addition, the proposed method does not modify the task model and can be used as a preprocessing module, which significantly reduces deployment cost in practical applications.
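The abstract describes a model-agnostic preprocessing pipeline: estimate the perturbation strength of an input, then route it to a defense of matching intensity. A minimal sketch of that idea is below, with several loudly hypothetical choices: the strength estimator (residual energy against a box-blurred copy), the grading thresholds, and the use of a mean filter and coarse quantization as simple stand-ins for the denoising and JPEG compression named in the paper's keywords. None of these specifics are taken from the paper itself.

```python
import numpy as np

def box_blur(image, k=3):
    # Simple k x k mean filter; a stand-in for the paper's image denoising step.
    pad = k // 2
    padded = np.pad(image.astype(np.float64), pad, mode="edge")
    out = np.zeros(image.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return out / (k * k)

def estimate_perturbation_strength(image, smoothed):
    # Hypothetical proxy: mean absolute residual between the input and its
    # smoothed version. High-frequency adversarial noise inflates this value.
    return float(np.mean(np.abs(image.astype(np.float64) - smoothed)))

def graded_defense(image, low_thr=2.0, high_thr=8.0):
    # Grade the perturbation, then apply a defense of matching intensity.
    # Thresholds are illustrative, not taken from the paper.
    smoothed = box_blur(image)
    strength = estimate_perturbation_strength(image, smoothed)
    if strength < low_thr:
        return image, "pass-through"        # negligible perturbation
    elif strength < high_thr:
        return smoothed, "denoise"          # mild perturbation: light denoising
    else:
        # Heavy perturbation: coarse quantization as a JPEG-like
        # compression stand-in (destroys fine-grained adversarial noise).
        return np.round(image / 16.0) * 16.0, "compress"
```

Because the routing happens entirely before the task model sees the input, the module can sit in front of any existing classifier without retraining, which is the deployment advantage the abstract emphasizes.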

Original language: English
Title of host publication: Digital Multimedia Communications - The 9th International Forum, IFTC 2022, Revised Selected Papers
Editors: Guangtao Zhai, Jun Zhou, Hua Yang, Xiaokang Yang, Jia Wang, Ping An
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 407-420
Number of pages: 14
ISBN (Print): 9789819908554
DOIs
State: Published - 2023
Event: 9th International Forum on Digital Multimedia Communication, IFTC 2022 - Shanghai, China
Duration: 9 Dec 2022 – 9 Dec 2022

Publication series

Name: Communications in Computer and Information Science
Volume: 1766 CCIS
ISSN (Print): 1865-0929
ISSN (Electronic): 1865-0937

Conference

Conference: 9th International Forum on Digital Multimedia Communication, IFTC 2022
Country/Territory: China
City: Shanghai
Period: 9/12/22 – 9/12/22

Keywords

  • Adversarial defense
  • Adversarial examples
  • Deep Neural Network
  • Image denoising
  • JPEG compression
