EANet: Edge-aware network for the extraction of buildings from aerial images

  • Guang Yang
  • Qian Zhang*
  • Guixu Zhang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

73 Scopus citations

Abstract

Deep learning methods have been used to extract buildings from remote sensing images and have achieved state-of-the-art performance. Most previous work has emphasized the multiscale fusion of features or the enlargement of receptive fields to capture global context, rather than focusing on low-level details such as edges. In this work, we propose a novel end-to-end edge-aware network, the EANet, together with an edge-aware loss, for the accurate extraction of buildings from aerial images. Specifically, the architecture is composed of an image segmentation network and an edge perception network that are responsible, respectively, for building prediction and edge investigation. The International Society for Photogrammetry and Remote Sensing (ISPRS) Potsdam segmentation benchmark and the Wuhan University (WHU) building benchmark were used to evaluate our approach, which achieved intersection-over-union scores of 90.19% and 93.33%, respectively, and top performance without using additional datasets, data augmentation, or post-processing. The EANet is effective in extracting buildings from aerial images, which shows that the quality of image segmentation can be improved by focusing on edge details.
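The multitask design described above — a segmentation branch supervised on building masks and an edge branch supervised on building boundaries — can be sketched as a combined loss. The following is a minimal illustrative sketch, not the authors' implementation: the function names, the gradient-based edge-target derivation, and the weighting factor `lam` are all assumptions for exposition.

```python
import numpy as np

def binary_cross_entropy(pred, target, eps=1e-7):
    # Mean binary cross-entropy over all pixels.
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def edge_map(mask):
    # Derive an edge target from the ground-truth mask: a pixel is an
    # edge if the mask changes between horizontal or vertical neighbours.
    gy = np.abs(np.diff(mask, axis=0, prepend=mask[:1]))
    gx = np.abs(np.diff(mask, axis=1, prepend=mask[:, :1]))
    return np.clip(gx + gy, 0.0, 1.0)

def edge_aware_loss(pred_mask, gt_mask, pred_edge, lam=1.0):
    # Total loss = segmentation BCE + lam * edge BCE, where the edge
    # target is computed from the ground-truth building mask.
    seg_loss = binary_cross_entropy(pred_mask, gt_mask)
    edge_loss = binary_cross_entropy(pred_edge, edge_map(gt_mask))
    return seg_loss + lam * edge_loss
```

In this sketch both branches share one scalar objective, so gradients from the edge branch also shape the features used for mask prediction, which is the intuition behind supervising edges explicitly.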

Original language: English
Article number: 2161
Journal: Remote Sensing
Volume: 12
Issue number: 13
DOIs
State: Published - 1 Jul 2020

Keywords

  • Building extraction
  • Convolutional neural networks
  • Edge
  • Multitask learning
  • Semantic segmentation
