TY - GEN
T1 - High-Fidelity Dynamic Human Synthesis via UV-Guided NeRF with Sparse Views
AU - Xie, Zhifeng
AU - Wang, Zhaosheng
AU - Wang, Sen
AU - Sun, Yuzhou
AU - Ma, Lizhuang
N1 - Publisher Copyright:
© 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2022
Y1 - 2022
N2 - In the field of dynamic human synthesis, some recent works try to decompose a non-rigidly deforming scene into a canonical neural radiance field and use a set of deformation fields to map observation-space points to the canonical space, thereby enabling them to learn the dynamic scene from images. Due to the highly under-constrained optimization caused by a deformation field without priors and the insufficient surface appearance information caused by sparse views, the rendered results exhibit obvious appearance artifacts. In this paper, to address the problem of artifacts, we present a novel method called UV-guided Neural Radiance Fields (UVNeRF), consisting of three modules: a Canonical Space Mapping Module (CSMM), a Texture Space Mapping Module (TSMM), and a UV-guided Neural Rendering Module (UVNRM). CSMM maps observation-space points to the canonical space based on 3D human skeletons, which regularizes learning of the deformation field. TSMM maps canonical-space points to the texture space to obtain a rough human surface representation in UV space as extra information. UVNRM renders the image using the outputs of CSMM and TSMM. Experimental studies on the Human3.6M and ZJU-MoCap datasets show that our approach achieves noteworthy improvements compared with recent dynamic human synthesis methods.
AB - In the field of dynamic human synthesis, some recent works try to decompose a non-rigidly deforming scene into a canonical neural radiance field and use a set of deformation fields to map observation-space points to the canonical space, thereby enabling them to learn the dynamic scene from images. Due to the highly under-constrained optimization caused by a deformation field without priors and the insufficient surface appearance information caused by sparse views, the rendered results exhibit obvious appearance artifacts. In this paper, to address the problem of artifacts, we present a novel method called UV-guided Neural Radiance Fields (UVNeRF), consisting of three modules: a Canonical Space Mapping Module (CSMM), a Texture Space Mapping Module (TSMM), and a UV-guided Neural Rendering Module (UVNRM). CSMM maps observation-space points to the canonical space based on 3D human skeletons, which regularizes learning of the deformation field. TSMM maps canonical-space points to the texture space to obtain a rough human surface representation in UV space as extra information. UVNRM renders the image using the outputs of CSMM and TSMM. Experimental studies on the Human3.6M and ZJU-MoCap datasets show that our approach achieves noteworthy improvements compared with recent dynamic human synthesis methods.
KW - Canonical space
KW - Human synthesis
KW - Neural radiance field
UR - https://www.scopus.com/pages/publications/85147986143
U2 - 10.1007/978-3-031-23473-6_28
DO - 10.1007/978-3-031-23473-6_28
M3 - Conference contribution
AN - SCOPUS:85147986143
SN - 9783031234729
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 357
EP - 368
BT - Advances in Computer Graphics - 39th Computer Graphics International Conference, CGI 2022, Proceedings
A2 - Magnenat-Thalmann, Nadia
A2 - Zhang, Jian
A2 - Kim, Jinman
A2 - Papagiannakis, George
A2 - Sheng, Bin
A2 - Thalmann, Daniel
A2 - Gavrilova, Marina
PB - Springer Science and Business Media Deutschland GmbH
T2 - 39th Computer Graphics International Conference on Advances in Computer Graphics, CGI 2022
Y2 - 12 September 2022 through 16 September 2022
ER -