TY - GEN
T1 - Backdoor Defense with Machine Unlearning
AU - Liu, Yang
AU - Fan, Mingyuan
AU - Chen, Cen
AU - Liu, Ximeng
AU - Ma, Zhuo
AU - Wang, Li
AU - Ma, Jianfeng
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Backdoor injection attacks are an emerging threat to the security of neural networks; however, effective defense methods against such attacks remain limited. In this paper, we propose BAERASER, a novel method that can erase a backdoor injected into a victim model through machine unlearning. Specifically, BAERASER implements backdoor defense in two key steps. First, trigger pattern recovery is conducted to extract the trigger patterns embedded in the victim model. Here, the trigger pattern recovery problem is equivalent to extracting an unknown noise distribution from the victim model, which can be resolved by an entropy-maximization-based generative model. Subsequently, BAERASER leverages the recovered trigger patterns to reverse the backdoor injection procedure and induce the victim model to erase the polluted memories through a newly designed gradient-ascent-based machine unlearning method. Compared with previous machine unlearning solutions, the proposed approach removes the reliance on full access to the training data for retraining and shows higher effectiveness at backdoor erasing than existing fine-tuning or pruning methods. Moreover, experiments show that BAERASER lowers the attack success rates of three state-of-the-art backdoor attacks by 99% on average across four benchmark datasets.
AB - Backdoor injection attacks are an emerging threat to the security of neural networks; however, effective defense methods against such attacks remain limited. In this paper, we propose BAERASER, a novel method that can erase a backdoor injected into a victim model through machine unlearning. Specifically, BAERASER implements backdoor defense in two key steps. First, trigger pattern recovery is conducted to extract the trigger patterns embedded in the victim model. Here, the trigger pattern recovery problem is equivalent to extracting an unknown noise distribution from the victim model, which can be resolved by an entropy-maximization-based generative model. Subsequently, BAERASER leverages the recovered trigger patterns to reverse the backdoor injection procedure and induce the victim model to erase the polluted memories through a newly designed gradient-ascent-based machine unlearning method. Compared with previous machine unlearning solutions, the proposed approach removes the reliance on full access to the training data for retraining and shows higher effectiveness at backdoor erasing than existing fine-tuning or pruning methods. Moreover, experiments show that BAERASER lowers the attack success rates of three state-of-the-art backdoor attacks by 99% on average across four benchmark datasets.
KW - Backdoor Defense
KW - Machine Unlearning
KW - Trigger Pattern Recovery
UR - https://www.scopus.com/pages/publications/85133299262
U2 - 10.1109/INFOCOM48880.2022.9796974
DO - 10.1109/INFOCOM48880.2022.9796974
M3 - Conference contribution
AN - SCOPUS:85133299262
T3 - Proceedings - IEEE INFOCOM
SP - 280
EP - 289
BT - INFOCOM 2022 - IEEE Conference on Computer Communications
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 41st IEEE Conference on Computer Communications, INFOCOM 2022
Y2 - 2 May 2022 through 5 May 2022
ER -