TY - JOUR
T1 - Cross-Modal Prostate Cancer Segmentation via Self-Attention Distillation
AU - Zhang, Guokai
AU - Shen, Xiaoang
AU - Zhang, Yu Dong
AU - Luo, Ye
AU - Luo, Jihao
AU - Zhu, Dandan
AU - Yang, Hanmei
AU - Wang, Weigang
AU - Zhao, Binghui
AU - Lu, Jianwei
N1 - Publisher Copyright:
© 2013 IEEE.
PY - 2022/11/1
Y1 - 2022/11/1
N2 - Automatic and accurate segmentation of prostate cancer from multi-modal magnetic resonance images is of prime importance for disease assessment and follow-up treatment planning. However, how to use multi-modal image features efficiently remains a challenging problem in medical image segmentation. In this paper, we develop a cross-modal self-attention distillation network that fully exploits the information encoded in the intermediate layers of different modalities; the attention maps generated for each modality enable the model to transfer significant, discriminative, and detail-rich information across modalities. Moreover, a novel spatial correlated feature fusion module is employed to learn more complementary correlation and non-linear information from the different modality images. We evaluate our model with five-fold cross-validation on 358 biopsy-confirmed MRI cases. Without bells and whistles, the proposed network achieves state-of-the-art performance in extensive experiments.
AB - Automatic and accurate segmentation of prostate cancer from multi-modal magnetic resonance images is of prime importance for disease assessment and follow-up treatment planning. However, how to use multi-modal image features efficiently remains a challenging problem in medical image segmentation. In this paper, we develop a cross-modal self-attention distillation network that fully exploits the information encoded in the intermediate layers of different modalities; the attention maps generated for each modality enable the model to transfer significant, discriminative, and detail-rich information across modalities. Moreover, a novel spatial correlated feature fusion module is employed to learn more complementary correlation and non-linear information from the different modality images. We evaluate our model with five-fold cross-validation on 358 biopsy-confirmed MRI cases. Without bells and whistles, the proposed network achieves state-of-the-art performance in extensive experiments.
KW - Prostate cancer segmentation
KW - cross-modal self-attention distillation
KW - multi-modal learning
KW - spatial correlated feature fusion
UR - https://www.scopus.com/pages/publications/85119436895
U2 - 10.1109/JBHI.2021.3127688
DO - 10.1109/JBHI.2021.3127688
M3 - Article
C2 - 34767517
AN - SCOPUS:85119436895
SN - 2168-2194
VL - 26
SP - 5298
EP - 5309
JO - IEEE Journal of Biomedical and Health Informatics
JF - IEEE Journal of Biomedical and Health Informatics
IS - 11
ER -