Generating adversarial examples for DNN using pooling layers

Research output: Contribution to journal › Article › peer-review


Abstract

Deep Neural Networks (DNNs) are an application of Big Data, and robustness is one of the most important issues for Big Data systems. This paper proposes a new approach, named PCD, for computing adversarial examples for DNNs and thereby increasing the robustness of Big Data applications. In safety-critical applications, adversarial examples are a major threat to the reliability of DNNs. PCD generates adversarial examples by driving different coverage of the pooling functions using gradient ascent. Among the 2707 input images, PCD generates 672 adversarial examples with L∞ distances less than 0.3. Compared to PGD, a state-of-the-art tool for generating adversarial examples with distances less than 0.3, PCD finds about 1.5 times as many adversarial examples as PGD (449).
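The abstract only outlines the idea of PCD; the sketch below illustrates, under stated assumptions, what a gradient-ascent search over a pooling-layer coverage objective combined with an L∞ projection (as in PGD) could look like. The toy model `SmallCNN`, the coverage objective (activating currently inactive pooling outputs), and all parameter values are illustrative assumptions, not the authors' actual PCD implementation.

```python
# Hedged sketch: gradient ascent on a pooling-coverage objective inside an
# L-inf ball of radius eps. Model, objective, and hyperparameters are
# illustrative assumptions, not the paper's PCD tool.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Toy CNN with one max-pooling layer whose outputs we try to 'cover'."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2)
        self.fc = nn.Linear(8 * 14 * 14, 10)

    def forward(self, x):
        pooled = self.pool(torch.relu(self.conv(x)))  # pooling activations
        logits = self.fc(pooled.flatten(1))
        return logits, pooled

def pooling_coverage_attack(model, x, eps=0.3, step=0.01, iters=40):
    """Perturb x by gradient ascent on the sum of currently inactive pooling
    outputs, projecting back into the L-inf ball of radius eps each step."""
    model.eval()
    x_orig = x.detach()
    logits, pooled = model(x_orig)
    label = logits.argmax(1)
    inactive = (pooled <= 0).float()  # pooling units not yet "covered"

    x_adv = x_orig.clone()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        _, pooled = model(x_adv)
        # Objective: push previously inactive pooling units to activate.
        obj = (pooled * inactive).sum()
        grad, = torch.autograd.grad(obj, x_adv)
        with torch.no_grad():
            x_adv = x_adv + step * grad.sign()                    # ascent step
            x_adv = x_orig + (x_adv - x_orig).clamp(-eps, eps)    # L-inf projection
            x_adv = x_adv.clamp(0, 1)                             # valid pixel range
        if model(x_adv)[0].argmax(1) != label:                    # prediction flipped
            break
    return x_adv.detach()

if __name__ == "__main__":
    model = SmallCNN()
    x = torch.rand(1, 1, 28, 28)  # stand-in for a 28x28 input image
    x_adv = pooling_coverage_attack(model, x)
    print("L-inf distance:", (x_adv - x).abs().max().item())
```

Whether a given perturbed input counts as an adversarial example would, as in the paper's evaluation, depend on the model's prediction changing while the L∞ distance stays below 0.3.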

Original language: English
Pages (from-to): 4615-4620
Number of pages: 6
Journal: Journal of Intelligent and Fuzzy Systems
Volume: 37
Issue number: 4
DOIs
State: Published - 2019

Keywords

  • Deep neural network
  • big data
  • coverage
  • robustness

