基于 Trans-nightSeg 的夜间道路场景语义分割方法

Translated title of the contribution: Semantic segmentation method on nighttime road scene based on Trans-nightSeg

Canlin Li, Wenjiao Zhang, Zhiwen Shao, Lizhuang Ma, Xinyue Wang

Research output: Contribution to journal › Article › peer-review

1 Scopus citation

Abstract

The semantic segmentation method Trans-nightSeg was proposed to address the low brightness of nighttime road scenes and the lack of annotated semantic segmentation datasets for them. TransCartoonGAN converted the annotated daytime road scene semantic segmentation dataset Cityscapes into low-light road scene images that share the same semantic segmentation annotations, thereby enriching the nighttime road scene data. The generated images, together with a real road scene dataset, were used as the input of N-Refinenet. The N-Refinenet network introduced a low-light image adaptive enhancement network to improve semantic segmentation performance on nighttime road scenes, and depthwise separable convolution was used instead of standard convolution to reduce computational complexity. The experimental results show that the proposed algorithm reaches a mean intersection over union (mIoU) of 56.0% on the Dark Zurich-test dataset and 56.6% on the Nighttime Driving-test dataset, outperforming other semantic segmentation algorithms for nighttime road scenes.
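The abstract notes that depthwise separable convolution replaces standard convolution to cut computational cost. The sketch below is a minimal, hypothetical PyTorch block illustrating that general technique only; it is not the authors' N-Refinenet code, and the class name, channel sizes, and BatchNorm/ReLU choices are assumptions for illustration.

```python
# Minimal sketch of a depthwise separable convolution block (illustrative,
# not the authors' implementation): a per-channel depthwise convolution
# followed by a 1x1 pointwise convolution that mixes channels, which needs
# far fewer multiply-adds than a standard k x k convolution.
import torch
import torch.nn as nn


class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1):
        super().__init__()
        # groups=in_ch applies one spatial filter per input channel.
        self.depthwise = nn.Conv2d(
            in_ch, in_ch, kernel_size,
            stride=stride, padding=kernel_size // 2,
            groups=in_ch, bias=False)
        # 1x1 convolution recombines channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))


if __name__ == "__main__":
    # Quick shape check on an arbitrary feature map (sizes are hypothetical).
    x = torch.randn(1, 64, 128, 256)
    block = DepthwiseSeparableConv(64, 128)
    print(block(x).shape)  # torch.Size([1, 128, 128, 256])
```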

Original language: Chinese (Traditional)
Pages (from-to): 294-303
Number of pages: 10
Journal: Zhejiang Daxue Xuebao (Gongxue Ban)/Journal of Zhejiang University (Engineering Science)
Volume: 58
Issue number: 2
DOIs
State: Published - Feb 2024
Externally published: Yes
