TY - GEN
T1 - mm3DFace
T2 - 21st ACM International Conference on Mobile Systems, Applications, and Services, MobiSys 2023
AU - Xie, Jiahong
AU - Kong, Hao
AU - Yu, Jiadi
AU - Chen, Yingying
AU - Kong, Linghe
AU - Zhu, Yanmin
AU - Tang, Feilong
N1 - Publisher Copyright:
© 2023 Owner/Author(s).
PY - 2023/6/18
Y1 - 2023/6/18
N2 - Recent years have witnessed the emerging market of 3D facial reconstruction that supports numerous face-driven scenarios including modeling in virtual reality (VR), human-computer interaction, and affective computing applications. Current mainstream approaches rely on vision for 3D facial reconstruction, which may encounter privacy concerns and suffer from obstruction scenes and bad lighting conditions. In this paper, we present a nonintrusive 3D facial reconstruction system, mm3DFace, which leverages a millimeter wave (mmWave) radar to reconstruct 3D human faces that continuously express facial expressions in a privacy-preserving and passive manner. Based on the pre-processed mmWave signals, mm3DFace first extracts facial geometric features that capture subtle changes in facial expressions through a ConvNeXt model with triple loss embedding. Then, mm3DFace derives distance and orientation-robust facial shapes with 68 facial landmarks using region-divided affine transformation. mm3DFace next reconstructs facial expressions through a designed regional amplification method and finally generates 3D facial avatars that continuously express facial expressions. Extensive experiments involving 15 participants in real-world environments show that mm3DFace can accurately track 68 facial landmarks with 3.94% normalized mean error, 2.30mm mean absolute error, and 4.10mm 3D-mean absolute error, which is effective and practical in real-world 3D facial reconstruction.
KW - deep learning
KW - facial reconstruction
KW - mmWave
KW - mobile sensing
UR - https://www.scopus.com/pages/publications/85169414084
U2 - 10.1145/3581791.3596839
DO - 10.1145/3581791.3596839
M3 - Conference contribution
AN - SCOPUS:85169414084
T3 - MobiSys 2023 - Proceedings of the 21st Annual International Conference on Mobile Systems, Applications and Services
SP - 462
EP - 474
BT - MobiSys 2023 - Proceedings of the 21st ACM International Conference on Mobile Systems, Applications and Services
PB - Association for Computing Machinery, Inc
Y2 - 18 June 2023 through 22 June 2023
ER -