TY - GEN
T1 - A Generative Data Augmentation Trained by Low-quality Annotations for Cholangiocarcinoma Hyperspectral Image Segmentation
AU - Dai, Kaijie
AU - Zhou, Zehao
AU - Qiu, Song
AU - Wang, Yan
AU - Zhou, Mei
AU - Li, Mingshuai
AU - Li, Qingli
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Microscopic hyperspectral imaging combined with deep learning has recently emerged in the medical field as a multiplexed imaging technology. With semantic segmentation of hyperspectral histopathological images of pathological tissue, doctors can quickly locate suspicious areas and diagnose and arrange treatment accurately and rapidly, reducing their workload. Cholangiocarcinoma is a rare and devastating disease with little hyperspectral histopathological data. Moreover, producing high-quality annotations of hyperspectral histopathological images is challenging and time-consuming for pathologists, so rough labels are generally annotated; however, directly using these low-quality labels degrades the performance of segmentation networks. How to fully utilize a few high-quality annotations and dozens of low-quality labels to enhance the segmentation performance on cholangiocarcinoma hyperspectral images therefore remains to be resolved. In this paper, we propose a two-stage hyperspectral segmentation deep learning framework based on Labels-to-Photo translation and the Swin-Spec Transformer (L2P-SST). In stage I, the OASIS generative network and the Swin-Spec Transformer discriminative network are trained adversarially, and a spectral perceptual loss function is proposed to generate high-quality hyperspectral images; in stage II, the parameters of the generative network are fixed and the generated hyperspectral images are used as data augmentation when training the Swin-Spec Transformer segmentation network. The proposed framework achieved 76.16% mIoU (mean Intersection over Union), 85.80% mDice (mean Dice), 90.96% Accuracy and a 71.65% Kappa coefficient on the semantic segmentation task of the Multidimensional Choledoch Database. Compared with other methods, the results demonstrate that our framework provides competitive segmentation performance.
AB - Microscopic hyperspectral imaging combined with deep learning has recently emerged in the medical field as a multiplexed imaging technology. With semantic segmentation of hyperspectral histopathological images of pathological tissue, doctors can quickly locate suspicious areas and diagnose and arrange treatment accurately and rapidly, reducing their workload. Cholangiocarcinoma is a rare and devastating disease with little hyperspectral histopathological data. Moreover, producing high-quality annotations of hyperspectral histopathological images is challenging and time-consuming for pathologists, so rough labels are generally annotated; however, directly using these low-quality labels degrades the performance of segmentation networks. How to fully utilize a few high-quality annotations and dozens of low-quality labels to enhance the segmentation performance on cholangiocarcinoma hyperspectral images therefore remains to be resolved. In this paper, we propose a two-stage hyperspectral segmentation deep learning framework based on Labels-to-Photo translation and the Swin-Spec Transformer (L2P-SST). In stage I, the OASIS generative network and the Swin-Spec Transformer discriminative network are trained adversarially, and a spectral perceptual loss function is proposed to generate high-quality hyperspectral images; in stage II, the parameters of the generative network are fixed and the generated hyperspectral images are used as data augmentation when training the Swin-Spec Transformer segmentation network. The proposed framework achieved 76.16% mIoU (mean Intersection over Union), 85.80% mDice (mean Dice), 90.96% Accuracy and a 71.65% Kappa coefficient on the semantic segmentation task of the Multidimensional Choledoch Database. Compared with other methods, the results demonstrate that our framework provides competitive segmentation performance.
KW - Image generation
KW - Microscopic hyperspectral imaging
KW - Semantic segmentation
KW - Transformer
UR - https://www.scopus.com/pages/publications/85169547917
U2 - 10.1109/IJCNN54540.2023.10191749
DO - 10.1109/IJCNN54540.2023.10191749
M3 - Conference contribution
AN - SCOPUS:85169547917
T3 - Proceedings of the International Joint Conference on Neural Networks
BT - IJCNN 2023 - International Joint Conference on Neural Networks, Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2023 International Joint Conference on Neural Networks, IJCNN 2023
Y2 - 18 June 2023 through 23 June 2023
ER -