Abstract
In recent years, deep unfolding (DU) models have gained significant traction in multi-source image fusion applications, including remote sensing pansharpening and the fusion of multispectral and hyperspectral images, owing to their enhanced interpretability. However, despite embedding physical degradation models, these DU models still learn their priors implicitly, which limits interpretability. In this paper, we present a novel image fusion model based on the maximum a posteriori (MAP) framework that improves both interpretability and fusion performance through two innovative prior modules. The first module employs a codebook to capture the spectral traits of the target image and integrates these insights into our DU framework. The second module uncovers the intrinsic correlations between multi-source images, ensuring that the fusion process effectively leverages the rich, diverse information encoded in each modality. Specifically, we unify the self-similarity and cross-modal similarity paradigms within a single cross-image prior module. To counteract the redundancy present in images, we incorporate an agent mechanism that substantially reduces the computational load of the prior module. Quantitative and qualitative experiments on multiple benchmark datasets demonstrate that the proposed method achieves more robust and higher-quality results than other state-of-the-art sharpening methods.
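To make the MAP-based deep unfolding concrete, the sketch below shows one generic unfolding stage: a gradient step on the data-fidelity term of a standard degradation model y = DBx (blur B followed by downsampling D), followed by a learned network standing in for the proximal/prior operator. The operators, step size, and prior architecture here are illustrative assumptions in the spirit of common DU designs, not the paper's exact formulation.

```python
# A minimal sketch of one MAP-style unfolding stage, assuming the standard
# degradation y = DBx and the update x_{k+1} = Prior(x_k - eta * (DB)^T (DB x_k - y)).
import torch
import torch.nn as nn
import torch.nn.functional as F

class UnfoldingStage(nn.Module):
    def __init__(self, channels, scale=4):
        super().__init__()
        self.scale = scale
        self.eta = nn.Parameter(torch.tensor(0.1))   # learned step size
        # Depthwise conv as a stand-in for the blur operator B.
        self.blur = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        # Small CNN as a stand-in for the learned prior/proximal module.
        self.prior = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1))

    def forward(self, x, y_lr):
        # Data-fidelity residual DB x - y, with D approximated by bicubic resampling.
        resid = F.interpolate(self.blur(x), scale_factor=1 / self.scale,
                              mode='bicubic') - y_lr
        # (DB)^T approximated by upsampling followed by the same blur kernel.
        grad = self.blur(F.interpolate(resid, scale_factor=self.scale,
                                       mode='bicubic'))
        return self.prior(x - self.eta * grad)
```

Stacking several such stages, each with its own parameters, yields the usual unrolled network whose structure mirrors the MAP iteration.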
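The abstract's codebook prior suggests a vector-quantization-style lookup over spectral features. The hypothetical sketch below replaces each pixel's spectral vector with its nearest entry in a learned codebook; the class name, code count, and straight-through gradient trick are assumptions borrowed from standard VQ modules, not the paper's exact construction.

```python
# A hypothetical VQ-style spectral codebook prior: each pixel's feature vector
# is snapped to its closest learned code word.
import torch
import torch.nn as nn

class SpectralCodebook(nn.Module):
    def __init__(self, num_codes=512, dim=64):
        super().__init__()
        self.codes = nn.Embedding(num_codes, dim)

    def forward(self, feats):
        # feats: (batch, dim, h, w) -> per-pixel spectral vectors.
        b, d, h, w = feats.shape
        flat = feats.permute(0, 2, 3, 1).reshape(-1, d)      # (b*h*w, d)
        dist = torch.cdist(flat, self.codes.weight)          # (b*h*w, num_codes)
        idx = dist.argmin(dim=1)
        quant = self.codes(idx).view(b, h, w, d).permute(0, 3, 1, 2)
        # Straight-through estimator keeps gradients flowing to the encoder.
        return feats + (quant - feats).detach()
```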
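Finally, the "agent mechanism" that reduces the prior module's cost plausibly follows the agent-attention pattern: a small set of m agent tokens mediates between queries and keys/values, lowering complexity from O(N²) to O(Nm) for N spatial tokens. The pooling-based agent construction and shapes below are illustrative assumptions.

```python
# A minimal sketch of agent attention with hypothetical shapes: agents first
# summarize the keys/values, then queries attend only to the m agents.
import torch
import torch.nn.functional as F

def agent_attention(q, k, v, num_agents=49):
    # q, k, v: (batch, n_tokens, dim); n_tokens is the flattened spatial size.
    b, n, d = q.shape
    # Agents as a pooled summary of the queries (one common choice).
    agents = F.adaptive_avg_pool1d(q.transpose(1, 2), num_agents).transpose(1, 2)
    # Step 1: agents attend to all keys/values -> (b, m, d).
    agent_vals = F.softmax(agents @ k.transpose(1, 2) / d**0.5, dim=-1) @ v
    # Step 2: queries attend only to the m agents -> (b, n, d).
    return F.softmax(q @ agents.transpose(1, 2) / d**0.5, dim=-1) @ agent_vals
```

Because cross-modal similarity can reuse the same two-step routing (queries from one modality, keys/values from another), a single agent pool can serve both the self-similarity and cross-modal branches.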
| Original language | English |
|---|---|
| Article number | 103172 |
| Journal | Information Fusion |
| Volume | 122 |
| DOIs | |
| State | Published - Oct 2025 |
Keywords
- Customized prior
- Deep unfolding
- Multispectral and hyperspectral image fusion
- Pansharpening