TY - GEN
T1 - Omni-Fusion of Spatial and Spectral for Hyperspectral Image Segmentation
AU - Zhang, Qing
AU - Pei, Guoquan
AU - Wang, Yan
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2026.
PY - 2026
Y1 - 2026
N2 - Medical Hyperspectral Imaging (MHSI) has emerged as a promising tool for enhanced disease diagnosis, particularly in computational pathology, offering rich spectral information that aids in identifying subtle biochemical properties of tissues. Despite these advantages, effectively fusing both spatial-dimensional and spectral-dimensional information from MHSIs remains challenging due to their inherent high dimensionality and spectral redundancy. To address these challenges, we propose a novel spatial-spectral omni-fusion network for hyperspectral image segmentation, named Omni-Fuse. We introduce abundant cross-dimensional feature fusion operations, including (1) a cross-dimensional enhancement module that refines both spatial and spectral features through bidirectional attention mechanisms; (2) a spectral-guided spatial query selection that selects the most spectrally relevant spatial feature as the query; and (3) a two-stage cross-dimensional decoder that dynamically guides the model’s attention towards the selected spatial query. Despite its numerous attention blocks, Omni-Fuse remains efficient in execution. Experiments on two microscopic hyperspectral image datasets show that our approach significantly improves segmentation performance compared with state-of-the-art methods, with over 5.73% improvement in DSC. Code available at: https://github.com/DeepMed-Lab-ECNU/Omni-Fuse.
AB - Medical Hyperspectral Imaging (MHSI) has emerged as a promising tool for enhanced disease diagnosis, particularly in computational pathology, offering rich spectral information that aids in identifying subtle biochemical properties of tissues. Despite these advantages, effectively fusing both spatial-dimensional and spectral-dimensional information from MHSIs remains challenging due to their inherent high dimensionality and spectral redundancy. To address these challenges, we propose a novel spatial-spectral omni-fusion network for hyperspectral image segmentation, named Omni-Fuse. We introduce abundant cross-dimensional feature fusion operations, including (1) a cross-dimensional enhancement module that refines both spatial and spectral features through bidirectional attention mechanisms; (2) a spectral-guided spatial query selection that selects the most spectrally relevant spatial feature as the query; and (3) a two-stage cross-dimensional decoder that dynamically guides the model’s attention towards the selected spatial query. Despite its numerous attention blocks, Omni-Fuse remains efficient in execution. Experiments on two microscopic hyperspectral image datasets show that our approach significantly improves segmentation performance compared with state-of-the-art methods, with over 5.73% improvement in DSC. Code available at: https://github.com/DeepMed-Lab-ECNU/Omni-Fuse.
UR - https://www.scopus.com/pages/publications/105017855025
U2 - 10.1007/978-3-032-04927-8_45
DO - 10.1007/978-3-032-04927-8_45
M3 - Conference contribution
AN - SCOPUS:105017855025
SN - 9783032049261
T3 - Lecture Notes in Computer Science
SP - 471
EP - 481
BT - Medical Image Computing and Computer Assisted Intervention, MICCAI 2025 - 28th International Conference, 2025, Proceedings
A2 - Gee, James C.
A2 - Hong, Jaesung
A2 - Sudre, Carole H.
A2 - Golland, Polina
A2 - Alexander, Daniel C.
A2 - Iglesias, Juan Eugenio
A2 - Venkataraman, Archana
A2 - Kim, Jong Hyo
PB - Springer Science and Business Media Deutschland GmbH
T2 - 28th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2025
Y2 - 23 September 2025 through 27 September 2025
ER -