An orthogonal classifier for improving the adversarial robustness of neural networks

  • Cong Xu
  • Xiang Li
  • Min Yang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

10 Scopus citations

Abstract

Neural networks are susceptible to artificially designed adversarial perturbations. Recent efforts have shown that imposing certain modifications on the classification layer can improve the robustness of neural networks. In this paper, we explicitly construct a dense orthogonal weight matrix whose entries all have the same magnitude, leading to a novel robust classifier. The proposed classifier avoids the undesired structural redundancy issue present in previous work. Applying this classifier in standard training on clean data is sufficient to ensure high accuracy and good robustness of the model. Moreover, when extra adversarial samples are used, better robustness can be obtained with the help of a special worst-case loss. Experimental results show that our method is efficient and competitive with many state-of-the-art defensive approaches. Our code is available at https://github.com/MTandHJ/roboc.
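The paper's construction is not reproduced on this page, but a classical example of a dense orthogonal matrix whose entries all share the same magnitude is a normalized Hadamard matrix. The sketch below (an illustrative assumption, not the authors' exact classifier) builds one via the Sylvester construction and checks both properties:

```python
import numpy as np

def hadamard(n: int) -> np.ndarray:
    """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
    assert n > 0 and (n & (n - 1)) == 0, "n must be a power of two"
    H = np.array([[1.0]])
    while H.shape[0] < n:
        # Double the size: [[H, H], [H, -H]] preserves orthogonality of rows.
        H = np.block([[H, H], [H, -H]])
    return H

n = 8
W = hadamard(n) / np.sqrt(n)  # normalize so the rows are orthonormal

# Every entry has the same magnitude 1/sqrt(n) (the matrix is dense) ...
assert np.allclose(np.abs(W), 1.0 / np.sqrt(n))
# ... and W is orthogonal: W @ W.T equals the identity.
assert np.allclose(W @ W.T, np.eye(n))
```

Such a matrix could serve as a fixed, non-redundant weight matrix for a classification layer; the paper's actual construction and training procedure are detailed in the article and the linked repository.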

Original language: English
Pages (from-to): 251-262
Number of pages: 12
Journal: Information Sciences
Volume: 591
State: Published - Apr 2022

Keywords

  • Adversarial robustness
  • Classification layer
  • Dense
  • Orthogonal
