
Understanding Adversarial Robustness from Feature Maps of Convolutional Layers

Research output: Contribution to journal › Article › peer-review

Abstract

The adversarial robustness of a neural network mainly relies on two factors: model capacity and antiperturbation ability. In this article, we study the antiperturbation ability of the network from the feature maps of convolutional layers. Our theoretical analysis discovers that larger convolutional feature maps before average pooling can contribute to better resistance to perturbations, but the conclusion is not true for max pooling. It brings new inspiration to the design of robust neural networks and urges us to apply these findings to improve existing architectures. The proposed modifications are very simple and only require upsampling the inputs or slightly modifying the stride configurations of downsampling operators. We verify our approaches on several benchmark neural network architectures, including AlexNet, VGG, ResNet18, and PreActResNet18. Nontrivial improvements in terms of both natural accuracy and adversarial robustness can be achieved under various attack and defense mechanisms. The code is available at https://github.com/MTandHJ/rcm.
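The intuition behind the average-pooling claim can be illustrated with a minimal NumPy sketch (this is an illustrative simulation, not the paper's formal analysis; the feature-map sizes and noise model are assumptions chosen for demonstration). Global average pooling averages an i.i.d. perturbation over all spatial positions, so a larger feature map shrinks the pooled perturbation:

```python
import numpy as np

rng = np.random.default_rng(0)

def pooled_perturbation(size, trials=2000, eps=0.1):
    """Mean magnitude of an i.i.d. perturbation after global average pooling.

    size: spatial width/height of the (hypothetical) feature map.
    eps:  per-element perturbation scale (an assumed noise model).
    """
    deltas = eps * rng.standard_normal((trials, size, size))
    pooled = deltas.mean(axis=(1, 2))  # global average pooling of the noise
    return np.abs(pooled).mean()

small = pooled_perturbation(4)   # 4x4 feature map before pooling
large = pooled_perturbation(16)  # 16x16 feature map before pooling

# The larger map averages over more positions, so the pooled
# perturbation is smaller -- consistent with the abstract's claim.
assert large < small
```

Max pooling lacks this averaging effect (it keeps a single extreme value per window), which matches the abstract's caveat that the conclusion does not hold for max pooling.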

Original language: English
Pages (from-to): 4690-4702
Number of pages: 13
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 36
Issue number: 3
DOI
Publication status: Published - 2025

