TY - JOUR
T1 - NLCMR
T2 - Indoor Depth Recovery Model With Non-Local Cross-Modality Prior
AU - Zhang, Junkang
AU - Qi, Zhengkai
AU - Fang, Faming
AU - Wang, Tingting
AU - Zhang, Guixu
N1 - Publisher Copyright:
© 2015 IEEE.
PY - 2025
Y1 - 2025
N2 - Recovering a dense depth image from sparse inputs is inherently challenging. Image-guided depth completion has become a prevalent technique, leveraging sparse depth data alongside RGB images to produce detailed depth maps. Although deep learning-based methods have achieved notable success, many state-of-the-art networks operate as black boxes, lacking transparent mechanisms for depth recovery. To address this, we introduce a novel model-guided depth recovery method. Our approach is built on a maximum a posteriori (MAP) framework and features an optimization model that incorporates a non-local cross-modality regularizer and a deep image prior. The cross-modality regularizer capitalizes on the inherent correlations between depth and RGB images, enhancing the extraction of shared information. Additionally, the deep image prior captures local characteristics between the depth and RGB domains effectively. To counter the challenge of high heterogeneity leading to degenerate operators, we have integrated an implicit data consistency term into our model. Our model is then realized as a network using the half-quadratic splitting algorithm. Extensive evaluations on the NYU-Depth V2 and SUN RGB-D datasets demonstrate that our method performs competitively with current deep learning techniques.
AB - Recovering a dense depth image from sparse inputs is inherently challenging. Image-guided depth completion has become a prevalent technique, leveraging sparse depth data alongside RGB images to produce detailed depth maps. Although deep learning-based methods have achieved notable success, many state-of-the-art networks operate as black boxes, lacking transparent mechanisms for depth recovery. To address this, we introduce a novel model-guided depth recovery method. Our approach is built on a maximum a posteriori (MAP) framework and features an optimization model that incorporates a non-local cross-modality regularizer and a deep image prior. The cross-modality regularizer capitalizes on the inherent correlations between depth and RGB images, enhancing the extraction of shared information. Additionally, the deep image prior captures local characteristics between the depth and RGB domains effectively. To counter the challenge of high heterogeneity leading to degenerate operators, we have integrated an implicit data consistency term into our model. Our model is then realized as a network using the half-quadratic splitting algorithm. Extensive evaluations on the NYU-Depth V2 and SUN RGB-D datasets demonstrate that our method performs competitively with current deep learning techniques.
KW - Convolutional neural network
KW - cross-modality prior
KW - deep unrolling
KW - depth recovery
UR - https://www.scopus.com/pages/publications/105001087259
U2 - 10.1109/TCI.2025.3545358
DO - 10.1109/TCI.2025.3545358
M3 - Article
AN - SCOPUS:105001087259
SN - 2333-9403
VL - 11
SP - 265
EP - 276
JO - IEEE Transactions on Computational Imaging
JF - IEEE Transactions on Computational Imaging
ER -