HandNet: Occlusion-robust 3D hand mesh reconstruction with prior information

Jiawen Li, Fei Jiang*, Dandan Zhu, Aimin Zhou

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

3D hand mesh reconstruction from a single RGB image is crucial for numerous applications yet challenging due to extensive occlusions. Interestingly, humans can infer plausible 3D hand shapes even under heavy occlusion by reasoning about full hand structures based on prior anatomical knowledge and contextual cues. Inspired by this cognitive process, we propose HandNet, a novel framework for 3D hand mesh reconstruction that explicitly utilizes both hand anatomy and contextual information to infer occluded structures. First, we introduce a dynamic relation modeling module that employs a graph-based representation of hand anatomy, capturing local skeletal topology and global contextual dependencies under anatomical constraints and adaptive correlations. Second, we design a cross-representation integration module that enables deep interaction between visual cues and structural priors, aligning shared features and promoting consistent hand representations. Extensive experiments on the DexYCB, HO3D v2, and HO3D v3 datasets, which contain challenging hand-object occlusions, demonstrate that HandNet achieves state-of-the-art performance.
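For illustration only, the sketch below shows one plausible way the idea behind the dynamic relation modeling module could be realized: per-joint features are refined by blending a fixed skeletal adjacency (the anatomical prior) with an input-adaptive correlation matrix (the contextual dependencies). The class name, feature dimensions, mixing rule, and the placeholder skeleton graph are all assumptions for this sketch and do not reflect the paper's actual implementation.

```python
# Hypothetical sketch: combine a fixed anatomical graph with learned,
# input-adaptive joint correlations, as described at a high level in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicRelationBlock(nn.Module):
    def __init__(self, num_joints: int, dim: int, skeleton_adj: torch.Tensor):
        super().__init__()
        # Fixed anatomical prior: row-normalized skeletal adjacency (J x J).
        self.register_buffer("adj", skeleton_adj / skeleton_adj.sum(-1, keepdim=True))
        # Projections used to compute input-adaptive joint correlations.
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.update = nn.Linear(dim, dim)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, J, C) per-joint features from an image backbone.
        q, k = self.query(x), self.key(x)
        # Adaptive correlations capture global contextual dependencies.
        adaptive = torch.softmax(q @ k.transpose(1, 2) / q.shape[-1] ** 0.5, dim=-1)
        # Blend fixed skeletal topology with the learned correlations
        # (equal weighting is an arbitrary choice for this sketch).
        relation = 0.5 * self.adj + 0.5 * adaptive
        out = self.update(relation @ x)
        return self.norm(x + F.gelu(out))


if __name__ == "__main__":
    J, C = 21, 64                                   # 21 hand joints, feature dim 64
    adj = torch.eye(J) + torch.rand(J, J).round()   # placeholder skeleton graph
    block = DynamicRelationBlock(J, C, adj)
    feats = torch.randn(2, J, C)
    print(block(feats).shape)                       # torch.Size([2, 21, 64])
```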

Original language: English
Article number: 114868
Journal: Knowledge-Based Systems
Volume: 332
DOIs
State: Published - 15 Dec 2025

Keywords

  • 3D hand mesh reconstruction
  • Cross-modal feature integration
  • Prior guided learning
