Re-Thinking Data Availability Attacks Against Deep Neural Networks

Bin Fang, Bo Li*, Shuang Wu, Shouhong Ding, Ran Yi, Lizhuang Ma*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

2 Scopus citations

Abstract

The unauthorized use of personal data for commercial purposes and the covert acquisition of private data for training machine learning models continue to raise concerns. To address these issues, researchers have proposed availability attacks that aim to render data unexploitable. However, many availability attack methods are easily disrupted by adversarial training, and even the robust methods that resist it offer only limited protection. In this paper, we re-examine existing availability attack methods and propose a novel two-stage min-max-min optimization paradigm to generate robust unlearnable noise. The inner min stage generates the unlearnable noise, while the outer min-max stage simulates the training process of the poisoned model. Additionally, we formulate the attack effects and use them to constrain the optimization objective. Comprehensive experiments show that the noise generated by our method can reduce the test accuracy of adversarially trained poisoned models by up to approximately 30% compared with SOTA methods.
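The two-stage structure described in the abstract can be sketched as an alternating loop: an inner minimization that crafts error-minimizing ("unlearnable") noise against the current model, and an outer min-max step that adversarially trains the model on the poisoned data. The sketch below is purely illustrative, using a toy linear classifier in NumPy; the function names, step sizes, perturbation bounds, and sign-gradient (PGD-style) updates are assumptions for clarity, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_and_grads(theta, x, y):
    """Logistic loss of a linear model; returns loss, d/dtheta, d/dx."""
    z = x @ theta
    p = 1.0 / (1.0 + np.exp(-y * z))      # probability of the correct label
    loss = -np.mean(np.log(p + 1e-12))
    g = -(y * (1.0 - p))                  # dloss/dz per sample
    d_theta = (x.T @ g) / len(y)
    d_x = np.outer(g, theta) / len(y)
    return loss, d_theta, d_x

def generate_unlearnable_noise(theta, x, y, eps=0.5, steps=10, lr=0.1):
    """Inner min: perturb data to *minimize* training loss (error-minimizing noise)."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        _, _, d_x = loss_and_grads(theta, x + delta, y)
        delta = np.clip(delta - lr * np.sign(d_x), -eps, eps)
    return delta

def adversarial_training_step(theta, x_poison, y, eps_adv=0.25, adv_steps=5, lr=0.1):
    """Outer min-max: adversarially train the poisoned model for one step."""
    # max: craft an adversarial perturbation on the poisoned data
    rho = np.zeros_like(x_poison)
    for _ in range(adv_steps):
        _, _, d_x = loss_and_grads(theta, x_poison + rho, y)
        rho = np.clip(rho + lr * np.sign(d_x), -eps_adv, eps_adv)
    # min: update the model on the adversarial examples
    loss, d_theta, _ = loss_and_grads(theta, x_poison + rho, y)
    return theta - 0.5 * d_theta, loss

# Toy data: two Gaussian blobs, labels in {-1, +1}
n, d = 200, 5
y = np.where(rng.random(n) < 0.5, 1.0, -1.0)
x = y[:, None] * 0.8 + rng.normal(scale=1.0, size=(n, d))
theta = np.zeros(d)

# Two-stage alternation: noise generation, then simulated adversarial training
for _ in range(20):
    delta = generate_unlearnable_noise(theta, x, y)            # stage 1: inner min
    theta, train_loss = adversarial_training_step(theta, x + delta, y)  # stage 2: min-max
```

The key design point the paradigm captures: because the noise is optimized against a model that is itself being adversarially trained, the resulting perturbations remain effective even when the victim applies adversarial training, unlike noise crafted against a standard training loop.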

Original language: English
Title of host publication: Proceedings - 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024
Publisher: IEEE Computer Society
Pages: 12215-12224
Number of pages: 10
ISBN (Electronic): 9798350353006
DOIs
State: Published - 2024
Externally published: Yes
Event: 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024 - Seattle, United States
Duration: 16 Jun 2024 – 22 Jun 2024

Publication series

Name: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
ISSN (Print): 1063-6919

Conference

Conference: 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024
Country/Territory: United States
City: Seattle
Period: 16/06/24 – 22/06/24

Keywords

  • Data Availability Attacks
  • Data Privacy
  • Unlearnable Examples
