Superdense-scale network for semantic segmentation

Zhiqiang Li, Jie Jiang, Xi Chen*, Honggang Qi, Qingli Li, Jiapeng Liu, Laiwen Zheng, Min Liu, Yundong Zhang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

8 Scopus citations

Abstract

Great progress has been made in semantic segmentation based on deep convolutional neural networks. However, semantic segmentation in complex scenes remains challenging due to the large-scale variation problem. To handle this problem, existing methods usually employ multiple receptive fields to capture multiscale features. Some works have verified that the denser the set of different receptive fields (scales), the easier it is to address the large-scale variation problem. To obtain denser scales, we propose a superdense-scale network (SDSNet). Specifically, we design a simple yet effective structure named the parallel-serial structure of atrous convolutions (PSSAC), in which superdense-scale high-level features are captured by explicitly adjusting each neuron's receptive field. The PSSAC improves over ASPP and DenseASPP by employing exponentially increasing scales obtained from serially connecting multiple parallel structures. To extract more accurate features, we construct an SDSNet consisting of a modified aligned Xception71 backbone followed by a PSSAC. Extensive semantic segmentation experiments are conducted to evaluate our SDSNet on three datasets, namely, Cityscapes, PASCAL VOC 2012, and ADE20K. Experimental results show that our SDSNet achieves state-of-the-art performance.
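The scale-density argument in the abstract can be illustrated with a receptive-field calculation. The sketch below is not the authors' implementation; it only counts the effective receptive fields of stride-1 atrous (dilated) 3×3 convolutions, comparing an ASPP-style set of parallel branches against a DenseASPP/PSSAC-style arrangement in which features may pass through any subset of serially chained dilated layers. The dilation rates (1, 2, 4, 8) are illustrative assumptions, chosen to show the exponential-scale idea.

```python
from itertools import combinations

def effective_rf(dilations, kernel_size=3):
    """Receptive field of serially stacked stride-1 atrous convolutions.

    Each stride-1 layer with kernel k and dilation d enlarges the
    receptive field by (k - 1) * d.
    """
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

# ASPP-style: four parallel branches, one dilation each -> only 4 scales.
aspp_scales = sorted({effective_rf([d]) for d in (1, 2, 4, 8)})

# Dense serial-parallel style: features can traverse any non-empty subset
# of the chained dilated layers, so each subset contributes its own scale.
dense_scales = sorted({effective_rf(sub)
                       for r in range(1, 5)
                       for sub in combinations((1, 2, 4, 8), r)})

print(aspp_scales)   # 4 sparse scales
print(dense_scales)  # every odd receptive field from 3 to 31: 15 scales
```

With exponentially spaced dilations, the subset sums of (1, 2, 4, 8) cover every integer from 1 to 15, so the combined structure realizes every odd receptive field from 3 to 31 with only four dilated layers, which is the sense in which serial-parallel chaining yields "superdense" scales.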

Original language: English
Pages (from-to): 30-41
Number of pages: 12
Journal: Neurocomputing
Volume: 504
DOIs
State: Published - 14 Sep 2022

Keywords

  • Atrous convolution
  • DCNN
  • Deep learning
  • Semantic segmentation
