TY - GEN
T1 - Attack-Guided Efficient Robustness Verification of ReLU Neural Networks
AU - Zhu, Yiwei
AU - Wang, Feng
AU - Wan, Wenjie
AU - Zhang, Min
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021/7/18
Y1 - 2021/7/18
N2 - Nowadays, the robustness of Deep Neural Networks (DNNs) is gaining more attention than ever, because DNNs are intensively adopted in safety-critical AI-enabled applications such as autonomous driving and authentication control. Formal methods have proved effective in providing provable guarantees of the robustness of DNNs. However, they suffer from poor scalability due to the intrinsically high computational complexity of the verification problem. In this paper, we propose a novel attack-guided approach for efficiently verifying the robustness of neural networks. The novelty of our approach is that we use existing attack approaches to generate coarse adversarial examples, by which we can significantly simplify the final verification problem. In particular, we focus on neural networks with ReLU activation functions, which are widely adopted for solving classification problems. The experimental results show that our approach outperforms verification tools based on constraint solving by up to a 69x speedup, while still computing minimum adversarial examples. The improvement is particularly significant on adversarially trained networks.
AB - Nowadays, the robustness of Deep Neural Networks (DNNs) is gaining more attention than ever, because DNNs are intensively adopted in safety-critical AI-enabled applications such as autonomous driving and authentication control. Formal methods have proved effective in providing provable guarantees of the robustness of DNNs. However, they suffer from poor scalability due to the intrinsically high computational complexity of the verification problem. In this paper, we propose a novel attack-guided approach for efficiently verifying the robustness of neural networks. The novelty of our approach is that we use existing attack approaches to generate coarse adversarial examples, by which we can significantly simplify the final verification problem. In particular, we focus on neural networks with ReLU activation functions, which are widely adopted for solving classification problems. The experimental results show that our approach outperforms verification tools based on constraint solving by up to a 69x speedup, while still computing minimum adversarial examples. The improvement is particularly significant on adversarially trained networks.
UR - https://www.scopus.com/pages/publications/85116430906
U2 - 10.1109/IJCNN52387.2021.9534410
DO - 10.1109/IJCNN52387.2021.9534410
M3 - Conference contribution
AN - SCOPUS:85116430906
T3 - Proceedings of the International Joint Conference on Neural Networks
BT - IJCNN 2021 - International Joint Conference on Neural Networks, Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2021 International Joint Conference on Neural Networks, IJCNN 2021
Y2 - 18 July 2021 through 22 July 2021
ER -