TY - GEN
T1 - Embedding backdoors as the facial features
T2 - 2020 ACM Turing Celebration Conference - China, ACM TURC 2020
AU - He, Can
AU - Xue, Mingfu
AU - Wang, Jian
AU - Liu, Weiqiang
N1 - Publisher Copyright:
© 2020 ACM.
PY - 2020/5/22
Y1 - 2020/5/22
AB - Deep neural network (DNN) based face recognition systems have been widely applied in various identity authentication scenarios. However, recent studies show that DNN models are vulnerable to backdoor attacks. An attacker can embed backdoors into a neural network by modifying its internal structure or by poisoning the training set. In this way, the attacker can log in to the system as the victim, while normal use of the system by legitimate users is not affected. However, the backdoors used in existing attacks are visually perceptible (black-frame glasses or purple sunglasses), which arouses human suspicion and thus leads to the failure of the attacks. In this paper, we propose a novel backdoor attack method, BHF2 (Backdoor Hidden as Facial Features), in which the attacker embeds the backdoors as inherent facial features. The proposed method greatly enhances the concealment of the injected backdoor, making the backdoor attack more difficult to discover. Moreover, the BHF2 method can be launched under black-box conditions, where the attacker is completely unaware of the internals of the target face recognition system. The proposed backdoor attack method can be applied in rigorous identity authentication scenarios where users are not allowed to wear any accessories. Experimental results show that the BHF2 method achieves a high attack success rate (up to 100%) on the state-of-the-art face recognition model DeepID1, while the normal working performance of the system is hardly affected (the recognition accuracy of the system drops by as little as 0.01%).
KW - Artificial intelligence security
KW - Backdoor attacks
KW - Deep learning
KW - Face recognition systems
UR - https://www.scopus.com/pages/publications/85095817023
U2 - 10.1145/3393527.3393567
DO - 10.1145/3393527.3393567
M3 - Conference contribution
AN - SCOPUS:85095817023
T3 - ACM International Conference Proceeding Series
SP - 231
EP - 235
BT - ACM TURC 2020 - Proceedings of ACM Turing Celebration Conference - China
PB - Association for Computing Machinery
Y2 - 21 May 2021 through 23 May 2021
ER -