TY - GEN
T1 - Generative Data Augmentation with Liveness Information Preserving for Face Anti-Spoofing
AU - Chen, Changgu
AU - Li, Yang
AU - Zhang, Jian
AU - Liu, Jiali
AU - Wang, Changbo
N1 - Publisher Copyright:
© 2024 Copyright held by the owner/author(s).
PY - 2024/6/7
Y1 - 2024/6/7
N2 - Face anti-spoofing is a critical aspect of ensuring security in the context of human-robot interaction and collaboration. Recently, disentanglement-based data augmentation methods have achieved great success in face anti-spoofing tasks. The underlying assumption of these methods is that liveness information can be completely disentangled and that the labeling of the augmented data can depend entirely on the liveness-related feature branch. However, we observe that it is almost impossible to extract the liveness-related information completely, which makes the current labeling strategy inaccurate. In this paper, we rethink the disentangling process and propose a novel generative data augmentation framework that does not force liveness information to be encoded into any specific feature space. Specifically, the original images are decomposed into a statistical feature space and a spatial feature space while preserving liveness information. With these two feature spaces, synthesized liveness-preserving images are generated via the Cartesian product to better approximate the distribution of real face anti-spoofing data. Along with the original samples, the augmented data are fed to a ResNet-based classifier with our proposed pseudo-label strategy for liveness information augmentation. Both qualitative and quantitative experiments demonstrate promising results and show the effectiveness of our proposed method.
AB - Face anti-spoofing is a critical aspect of ensuring security in the context of human-robot interaction and collaboration. Recently, disentanglement-based data augmentation methods have achieved great success in face anti-spoofing tasks. The underlying assumption of these methods is that liveness information can be completely disentangled and that the labeling of the augmented data can depend entirely on the liveness-related feature branch. However, we observe that it is almost impossible to extract the liveness-related information completely, which makes the current labeling strategy inaccurate. In this paper, we rethink the disentangling process and propose a novel generative data augmentation framework that does not force liveness information to be encoded into any specific feature space. Specifically, the original images are decomposed into a statistical feature space and a spatial feature space while preserving liveness information. With these two feature spaces, synthesized liveness-preserving images are generated via the Cartesian product to better approximate the distribution of real face anti-spoofing data. Along with the original samples, the augmented data are fed to a ResNet-based classifier with our proposed pseudo-label strategy for liveness information augmentation. Both qualitative and quantitative experiments demonstrate promising results and show the effectiveness of our proposed method.
KW - Computer Vision
KW - Facial Security
KW - Human-Computer Interaction
UR - https://www.scopus.com/pages/publications/85199132328
U2 - 10.1145/3652583.3658078
DO - 10.1145/3652583.3658078
M3 - Conference contribution
AN - SCOPUS:85199132328
T3 - ICMR 2024 - Proceedings of the 2024 International Conference on Multimedia Retrieval
SP - 302
EP - 310
BT - ICMR 2024 - Proceedings of the 14th Annual ACM International Conference on Multimedia Retrieval
PB - Association for Computing Machinery, Inc
T2 - 14th Annual ACM International Conference on Multimedia Retrieval, ICMR 2024
Y2 - 10 June 2024 through 14 June 2024
ER -