Robustness of Sketched Linear Classifiers to Adversarial Attacks

  • Ananth Mahadevan
  • Arpit Merchant
  • Yanhao Wang
  • Michael Mathioudakis

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Linear classifiers are well known to be vulnerable to adversarial attacks: they may predict incorrect labels for input data that are adversarially modified with small perturbations. However, this phenomenon has not been properly understood in the context of sketch-based linear classifiers, typically used in memory-constrained paradigms, which rely on random projections of the features for model compression. In this paper, we propose novel Fast-Gradient-Sign Method (FGSM) attacks for sketched classifiers in full-, partial-, and black-box-information settings with regard to their internal parameters. We perform extensive experiments on the MNIST dataset to characterize their robustness as a function of perturbation budget. Our results suggest that, in the full-information setting, these classifiers are less accurate on unaltered input than their uncompressed counterparts but just as susceptible to adversarial attacks. In the more realistic partial- and black-box-information settings, however, sketching improves robustness while having a lower memory footprint.
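The full-information attack described in the abstract can be illustrated with a small sketch. The following is a minimal, hypothetical example (not the paper's implementation): a linear classifier whose features are compressed by a Gaussian random projection `R`, attacked with FGSM using the exact gradient of a logistic loss with respect to the input. The dimensions, projection, and weights `v` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k = 784, 128  # original and sketched feature dimensions (illustrative)
# Gaussian random projection: one common choice of sketching matrix
R = rng.normal(0.0, 1.0 / np.sqrt(k), size=(k, d))

# Hypothetical trained weights in the sketched (compressed) space
v = rng.normal(size=k)

def predict(x):
    """Sketched linear classifier: sign(v . (R x))."""
    return np.sign(v @ (R @ x))

def fgsm_attack(x, y, eps):
    """Full-information FGSM: step by eps in the sign of the input
    gradient of the logistic loss, which increases the loss."""
    margin = y * (v @ (R @ x))
    # d(loss)/dx = -y * sigmoid(-margin) * R^T v
    grad = -y * (1.0 / (1.0 + np.exp(margin))) * (R.T @ v)
    return x + eps * np.sign(grad)

x = rng.normal(size=d)
y = predict(x)                       # attack the model's own label
x_adv = fgsm_attack(x, y, eps=0.5)   # L-infinity budget eps
```

Because the perturbation is bounded in the L-infinity norm by `eps`, varying `eps` traces out the robustness-versus-budget curves the paper studies; the partial- and black-box settings differ only in how much of `R` and `v` the attacker is assumed to know.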

Original language: English
Title of host publication: CIKM 2022 - Proceedings of the 31st ACM International Conference on Information and Knowledge Management
Publisher: Association for Computing Machinery
Pages: 4319-4323
Number of pages: 5
ISBN (Electronic): 9781450392365
DOIs
State: Published - 17 Oct 2022
Event: 31st ACM International Conference on Information and Knowledge Management, CIKM 2022 - Atlanta, United States
Duration: 17 Oct 2022 - 21 Oct 2022

Publication series

Name: International Conference on Information and Knowledge Management, Proceedings
ISSN (Print): 2155-0751

Conference

Conference: 31st ACM International Conference on Information and Knowledge Management, CIKM 2022
Country/Territory: United States
City: Atlanta
Period: 17/10/22 - 21/10/22

Keywords

  • adversarial machine learning
  • robustness
  • sketching
