TY - GEN
T1 - Eliminating Backdoor Triggers for Deep Neural Networks Using Attention Relation Graph Distillation
AU - Xia, Jun
AU - Wang, Ting
AU - Ding, Jiepin
AU - Wei, Xian
AU - Chen, Mingsong
N1 - Publisher Copyright:
© 2022 International Joint Conferences on Artificial Intelligence. All rights reserved.
PY - 2022
Y1 - 2022
N2 - Due to the prosperity of Artificial Intelligence (AI) techniques, more and more backdoor triggers are being designed by adversaries to attack Deep Neural Networks (DNNs). Although the state-of-the-art method Neural Attention Distillation (NAD) can effectively erase backdoors from DNNs, it still suffers from a non-negligible Attack Success Rate (ASR) together with lowered classification ACCuracy (ACC), since NAD focuses on backdoor defense using attention features (i.e., attention maps) of the same order. In this paper, we introduce a novel backdoor defense framework named Attention Relation Graph Distillation (ARGD), which fully explores the correlations among attention features of different orders using our proposed Attention Relation Graphs (ARGs). By aligning the ARGs of the teacher and student models during knowledge distillation, ARGD can eradicate backdoors more effectively than NAD. Comprehensive experimental results show that, against six of the latest backdoor attacks, ARGD outperforms NAD by up to a 94.85% reduction in ASR, while improving ACC by up to 3.23%.
AB - Due to the prosperity of Artificial Intelligence (AI) techniques, more and more backdoor triggers are being designed by adversaries to attack Deep Neural Networks (DNNs). Although the state-of-the-art method Neural Attention Distillation (NAD) can effectively erase backdoors from DNNs, it still suffers from a non-negligible Attack Success Rate (ASR) together with lowered classification ACCuracy (ACC), since NAD focuses on backdoor defense using attention features (i.e., attention maps) of the same order. In this paper, we introduce a novel backdoor defense framework named Attention Relation Graph Distillation (ARGD), which fully explores the correlations among attention features of different orders using our proposed Attention Relation Graphs (ARGs). By aligning the ARGs of the teacher and student models during knowledge distillation, ARGD can eradicate backdoors more effectively than NAD. Comprehensive experimental results show that, against six of the latest backdoor attacks, ARGD outperforms NAD by up to a 94.85% reduction in ASR, while improving ACC by up to 3.23%.
UR - https://www.scopus.com/pages/publications/85137914340
U2 - 10.24963/ijcai.2022/206
DO - 10.24963/ijcai.2022/206
M3 - Conference contribution
AN - SCOPUS:85137914340
T3 - IJCAI International Joint Conference on Artificial Intelligence
SP - 1481
EP - 1487
BT - Proceedings of the 31st International Joint Conference on Artificial Intelligence, IJCAI 2022
A2 - De Raedt, Luc
PB - International Joint Conferences on Artificial Intelligence
T2 - 31st International Joint Conference on Artificial Intelligence, IJCAI 2022
Y2 - 23 July 2022 through 29 July 2022
ER -