TY - GEN
T1 - Sample-Specific Backdoor based Active Intellectual Property Protection for Deep Neural Networks
AU - Wu, Yinghao
AU - Xue, Mingfu
AU - Gu, Dujuan
AU - Zhang, Yushu
AU - Liu, Weiqiang
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Recently, a number of studies have been proposed to protect the intellectual property (IP) of Deep Neural Network (DNN) models. However, most existing works are passive protection methods, as they attempt to extract a watermark from the pirated model only after piracy occurs. In this paper, we propose an active IP protection method for DNNs in which we utilize a variant of the sample-specific backdoor attack to implement active authorization control for DNN models. During training, we mislabel all the clean images and keep the labels of backdoor instances as their ground-truth labels. Unlike general backdoor triggers, we train a U-Net model to generate sample-specific triggers. This kind of trigger is sample-specific and invisible; it works as the secret key for each image and is hard to notice. Moreover, compared with existing active DNN IP protection methods, the proposed method can be applied in the black-box scenario. Experimental results on the ImageNet and YouTube Aligned Face datasets demonstrate the effectiveness and robustness of the proposed method.
AB - Recently, a number of studies have been proposed to protect the intellectual property (IP) of Deep Neural Network (DNN) models. However, most existing works are passive protection methods, as they attempt to extract a watermark from the pirated model only after piracy occurs. In this paper, we propose an active IP protection method for DNNs in which we utilize a variant of the sample-specific backdoor attack to implement active authorization control for DNN models. During training, we mislabel all the clean images and keep the labels of backdoor instances as their ground-truth labels. Unlike general backdoor triggers, we train a U-Net model to generate sample-specific triggers. This kind of trigger is sample-specific and invisible; it works as the secret key for each image and is hard to notice. Moreover, compared with existing active DNN IP protection methods, the proposed method can be applied in the black-box scenario. Experimental results on the ImageNet and YouTube Aligned Face datasets demonstrate the effectiveness and robustness of the proposed method.
KW - Active Authorization Control
KW - Backdoor Attack
KW - Deep Neural Network
KW - Intellectual Property Protection
KW - Sample-Specific Trigger
UR - https://www.scopus.com/pages/publications/85139062639
U2 - 10.1109/AICAS54282.2022.9869927
DO - 10.1109/AICAS54282.2022.9869927
M3 - Conference contribution
AN - SCOPUS:85139062639
T3 - Proceeding - IEEE International Conference on Artificial Intelligence Circuits and Systems, AICAS 2022
SP - 316
EP - 319
BT - Proceeding - IEEE International Conference on Artificial Intelligence Circuits and Systems, AICAS 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 4th IEEE International Conference on Artificial Intelligence Circuits and Systems, AICAS 2022
Y2 - 13 June 2022 through 15 June 2022
ER -