TY - JOUR
T1 - Robust Deep Convolutional Dictionary Model With Alignment Assistance for Multi-Contrast MRI Super-Resolution
AU - Lei, Pengcheng
AU - Zhang, Miaomiao
AU - Fang, Faming
AU - Zhang, Guixu
N1 - Publisher Copyright:
© 1982-2012 IEEE.
PY - 2025
Y1 - 2025
N2 - Multi-contrast magnetic resonance imaging (MCMRI) super-resolution (SR) methods aim to leverage the complementary information present in multi-contrast images. However, existing methods encounter several limitations. Firstly, most current networks fail to appropriately model the correlations between multi-contrast images and lack interpretability. Secondly, they often overlook the negative impact of spatial misalignment between modalities in clinical practice. Thirdly, existing methods do not effectively constrain the complementary information learned between multi-contrast images, resulting in information redundancy and limiting model performance. In this paper, we propose a robust alignment-assisted multi-contrast convolutional dictionary (A2-CDic) model to address these challenges. Specifically, we develop an observation model based on convolutional sparse coding to explicitly represent multi-contrast images as common (e.g., consistent textures) and unique (e.g., inconsistent structures and contrasts) components. Considering that spatial misalignments exist in real-world multi-contrast images, we incorporate a spatial alignment module to compensate for the misaligned structures. This approach enables the proposed model to fully exploit the valuable information in the reference image while mitigating interference from inconsistent information. We employ the proximal gradient algorithm to optimize the model and unroll the iterative steps into a multi-scale convolutional dictionary network. Furthermore, we utilize mutual information losses to constrain the extracted common and unique components. This constraint reduces the redundancy between the decomposed components, allowing each sub-module to learn more representative features. We evaluate our model on four publicly available datasets comprising internal, external, spatially aligned, and misaligned MCMRI images. The experimental results demonstrate that our model surpasses existing state-of-the-art MCMRI SR methods in terms of both generalization ability and overall performance.
AB - Multi-contrast magnetic resonance imaging (MCMRI) super-resolution (SR) methods aim to leverage the complementary information present in multi-contrast images. However, existing methods encounter several limitations. Firstly, most current networks fail to appropriately model the correlations between multi-contrast images and lack interpretability. Secondly, they often overlook the negative impact of spatial misalignment between modalities in clinical practice. Thirdly, existing methods do not effectively constrain the complementary information learned between multi-contrast images, resulting in information redundancy and limiting model performance. In this paper, we propose a robust alignment-assisted multi-contrast convolutional dictionary (A2-CDic) model to address these challenges. Specifically, we develop an observation model based on convolutional sparse coding to explicitly represent multi-contrast images as common (e.g., consistent textures) and unique (e.g., inconsistent structures and contrasts) components. Considering that spatial misalignments exist in real-world multi-contrast images, we incorporate a spatial alignment module to compensate for the misaligned structures. This approach enables the proposed model to fully exploit the valuable information in the reference image while mitigating interference from inconsistent information. We employ the proximal gradient algorithm to optimize the model and unroll the iterative steps into a multi-scale convolutional dictionary network. Furthermore, we utilize mutual information losses to constrain the extracted common and unique components. This constraint reduces the redundancy between the decomposed components, allowing each sub-module to learn more representative features. We evaluate our model on four publicly available datasets comprising internal, external, spatially aligned, and misaligned MCMRI images. The experimental results demonstrate that our model surpasses existing state-of-the-art MCMRI SR methods in terms of both generalization ability and overall performance.
KW - Multi-contrast MRI
KW - deep-unfolding
KW - image registration
KW - super-resolution
UR - https://www.scopus.com/pages/publications/105003501355
U2 - 10.1109/TMI.2025.3563523
DO - 10.1109/TMI.2025.3563523
M3 - Article
C2 - 40266866
AN - SCOPUS:105003501355
SN - 0278-0062
VL - 44
SP - 3383
EP - 3396
JO - IEEE Transactions on Medical Imaging
JF - IEEE Transactions on Medical Imaging
IS - 8
ER -