Abstract
Deep neural networks (DNNs) are an application of Big Data, and robustness is one of the most important issues in Big Data systems. This paper proposes a new approach, named PCD, for computing adversarial examples for DNNs and thereby increasing their robustness. In safety-critical applications, adversarial examples are major threats to the reliability of DNNs. PCD generates adversarial examples by driving different coverage of pooling functions using gradient ascent. Among 2707 input images, PCD generates 672 adversarial examples with L∞ distances less than 0.3. Compared to PGD, the state-of-the-art tool for generating adversarial examples within the same distance bound, PCD finds roughly 1.5 times as many adversarial examples as PGD (449) does.
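The abstract does not give PCD's algorithm in detail, but the PGD baseline it is compared against is well known: iterated gradient-sign ascent projected back into an L∞ ball of radius ε = 0.3. A minimal sketch of that projection loop, using NumPy and a toy linear "model" whose gradient is constant (the function names `pgd_attack` and `grad_fn` are illustrative, not from the paper):

```python
import numpy as np

def pgd_attack(x, grad_fn, eps=0.3, alpha=0.05, steps=20):
    """Projected gradient ascent within an L-infinity ball of radius eps.

    x       -- original input (e.g. a flattened image in [0, 1])
    grad_fn -- returns the gradient of the loss w.r.t. the input
    """
    x_adv = x.copy()
    for _ in range(steps):
        g = grad_fn(x_adv)
        x_adv = x_adv + alpha * np.sign(g)        # gradient-sign ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project back into the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep a valid pixel range
    return x_adv

# Toy example: maximize the loss w . x of a linear model,
# whose gradient w.r.t. x is simply w.
w = np.array([1.0, -2.0, 0.5])
x0 = np.array([0.5, 0.5, 0.5])
adv = pgd_attack(x0, grad_fn=lambda x: w)

# The perturbation never exceeds the L-infinity budget of 0.3.
print(np.max(np.abs(adv - x0)))  # → 0.3
```

PCD replaces PGD's loss gradient with an objective that increases coverage of the network's pooling functions, but reuses the same kind of bounded ascent, which is why the L∞ < 0.3 budget makes the two tools directly comparable.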
| Original language | English |
|---|---|
| Pages (from-to) | 4615-4620 |
| Number of pages | 6 |
| Journal | Journal of Intelligent and Fuzzy Systems |
| Volume | 37 |
| Issue number | 4 |
| DOIs | |
| State | Published - 2019 |
Keywords
- Deep neural network
- big data
- coverage
- robustness