Cross-Modal Prostate Cancer Segmentation via Self-Attention Distillation

  • Guokai Zhang
  • Xiaoang Shen
  • Yu Dong Zhang
  • Ye Luo*
  • Jihao Luo
  • Dandan Zhu
  • Hanmei Yang
  • Weigang Wang
  • Binghui Zhao*
  • Jianwei Lu*

  *Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

30 Scopus citations

Abstract

Automatic and accurate segmentation of prostate cancer from multi-modal magnetic resonance images is of prime importance for disease assessment and follow-up treatment planning. However, how to use multi-modal image features efficiently remains a challenging problem in medical image segmentation. In this paper, we develop a cross-modal self-attention distillation network that fully exploits the information encoded in the intermediate layers of different modalities; the attention maps generated for each modality enable the model to transfer significant, discriminative, and detail-rich information across modalities. Moreover, a novel spatial correlated feature fusion module is employed to learn complementary correlations and non-linear information from the different modality images. We evaluate our model with five-fold cross-validation on 358 biopsy-confirmed MRI cases. Without bells and whistles, the proposed network achieves state-of-the-art performance in extensive experiments.
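The abstract's cross-modal distillation idea can be illustrated with a common attention-transfer recipe: collapse each modality's intermediate feature map into a spatial attention map (sum of squared channel activations, L2-normalized) and penalize the distance between the two modalities' maps. This is a minimal sketch under that assumption; the function names, tensor shapes, and loss form are illustrative, not the authors' exact formulation.

```python
import numpy as np

def attention_map(feat):
    # feat: (C, H, W) intermediate features from one modality branch.
    # Collapse channels via sum of squared activations, then L2-normalize
    # the flattened map so maps from different layers are scale-comparable.
    attn = np.square(feat).sum(axis=0).ravel()
    return attn / (np.linalg.norm(attn) + 1e-12)

def attention_distillation_loss(feat_a, feat_b):
    # Mean squared distance between the two normalized attention maps;
    # minimizing it encourages the branches to attend to the same regions.
    diff = attention_map(feat_a) - attention_map(feat_b)
    return float(np.mean(diff ** 2))

rng = np.random.default_rng(0)
t2w = rng.standard_normal((64, 32, 32))   # hypothetical T2-weighted branch features
adc = rng.standard_normal((64, 32, 32))   # hypothetical ADC branch features
loss = attention_distillation_loss(t2w, adc)
```

In practice such a term would be added to the segmentation loss at several intermediate layers, weighted by a small coefficient.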

Original language: English
Pages (from-to): 5298-5309
Number of pages: 12
Journal: IEEE Journal of Biomedical and Health Informatics
Volume: 26
Issue number: 11
DOIs
State: Published - 1 Nov 2022
Externally published: Yes

Keywords

  • Prostate cancer segmentation
  • cross-modal self-attention distillation
  • multi-modal learning
  • spatial correlated feature fusion
