NLCMR: Indoor Depth Recovery Model With Non-Local Cross-Modality Prior

Junkang Zhang, Zhengkai Qi, Faming Fang, Tingting Wang, Guixu Zhang

Research output: Contribution to journal › Article › peer-review

Abstract

Recovering a dense depth image from sparse inputs is inherently challenging. Image-guided depth completion has become a prevalent technique, leveraging sparse depth data alongside RGB images to produce detailed depth maps. Although deep learning-based methods have achieved notable success, many state-of-the-art networks operate as black boxes, lacking transparent mechanisms for depth recovery. To address this, we introduce a novel model-guided depth recovery method. Our approach is built on a maximum a posteriori (MAP) framework and features an optimization model that incorporates a non-local cross-modality regularizer and a deep image prior. The cross-modality regularizer capitalizes on the inherent correlations between depth and RGB images, enhancing the extraction of shared information. Additionally, the deep image prior effectively captures local characteristics shared by the depth and RGB domains. To counter the challenge of high heterogeneity leading to degenerate operators, we integrate an implicit data consistency term into our model. Our model is then realized as a network using the half-quadratic splitting algorithm. Extensive evaluations on the NYU-Depth V2 and SUN RGB-D datasets demonstrate that our method performs competitively with current deep learning techniques.
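To illustrate the optimization scheme the abstract names, the sketch below shows half-quadratic splitting (HQS) applied to a toy depth-completion energy. This is not the paper's model: the function names, the simple quadratic data term, and the use of a box blur in place of the learned non-local cross-modality prior are all illustrative assumptions. HQS introduces an auxiliary variable and alternates a closed-form data-fidelity step with a proximal step on the prior (which, in an unrolled network such as the paper's, would be a learned module).

```python
import numpy as np

def box_blur(img):
    """3x3 box blur with edge replication; stands in for the prior's
    proximal operator (a learned network in the paper's model)."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def hqs_depth_completion(sparse_depth, mask, iters=50, mu=1.0):
    """Toy HQS sketch for depth completion (illustrative, not the paper's method).

    Minimizes ||mask * (x - y)||^2 + prior(x) by splitting x = z and alternating:
      z-step: proximal/denoising step on the auxiliary variable z
      x-step: elementwise closed form of the quadratic data subproblem
    """
    # Initialize unobserved pixels with the mean of the observed depths
    x = np.where(mask, sparse_depth, sparse_depth[mask].mean())
    for _ in range(iters):
        z = box_blur(x)                                   # z-step (prior)
        x = (mask * sparse_depth + mu * z) / (mask + mu)  # x-step (data)
    return x

# Hypothetical usage: recover a linear depth ramp from 20% sparse samples
rng = np.random.default_rng(0)
gt = np.tile(np.linspace(0.0, 1.0, 16), (16, 1))  # synthetic ground truth
mask = rng.random((16, 16)) < 0.2
mask[0, 0] = True  # guarantee at least one observation
y = np.where(mask, gt, 0.0)
recovered = hqs_depth_completion(y, mask)
```

In the paper's unrolled network, each HQS iteration becomes a network stage, with the hand-crafted blur above replaced by the non-local cross-modality regularizer and deep image prior conditioned on the RGB guide.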

Original language: English
Pages (from-to): 265-276
Number of pages: 12
Journal: IEEE Transactions on Computational Imaging
Volume: 11
DOIs
State: Published - 2025

Keywords

  • Convolutional neural network
  • cross-modality prior
  • deep unrolling
  • depth recovery
