Adjustable super-resolution network via deep supervised learning and progressive self-distillation

Juncheng Li, Faming Fang, Tieyong Zeng, Guixu Zhang, Xizhao Wang

Research output: Contribution to journal › Article › peer-review


Abstract

With the use of convolutional neural networks, Single-Image Super-Resolution (SISR) has advanced dramatically in recent years. However, all of these models share a limitation: their structure must remain identical between training and testing. This severely restricts flexibility, making it difficult to deploy the same model on platforms of different capacities (e.g., computers, smartphones, and embedded devices). It is therefore crucial to develop a model that can adapt to different needs without retraining. To this end, we propose a lightweight Adjustable Super-Resolution Network (ASRN). Specifically, ASRN consists of a series of Multi-scale Aggregation Blocks (MABs), lightweight and efficient modules specially designed for feature extraction. A Deep Supervised Learning (DSL) strategy is introduced to guarantee the performance of each sub-network, and a novel Progressive Self-Distillation (PSD) strategy is proposed to further improve the intermediate results of the model. With the help of the DSL and PSD strategies, ASRN achieves elastic image reconstruction. ASRN is the first elastic SISR model, producing good results when its size is changed directly without retraining.
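
The following is a minimal sketch, not the authors' code, of the elastic-SISR idea the abstract describes: a chain of feature-extraction blocks, each followed by its own reconstruction head (deep supervision), so that the network can be truncated to any depth at test time, plus a progressive self-distillation term that pulls each shallower output toward the next deeper one. The class names (`MAB`, `ASRNSketch`), the block design, the loss form, and the distillation weight are all hypothetical placeholders for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MAB(nn.Module):
    """Placeholder feature block standing in for a Multi-scale Aggregation Block."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # residual connection

class ASRNSketch(nn.Module):
    def __init__(self, num_blocks=8, channels=64, scale=4):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.blocks = nn.ModuleList(MAB(channels) for _ in range(num_blocks))
        # One reconstruction head per block enables deep supervision and
        # lets the model be cut to any depth at deployment without retraining.
        self.tails = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, 3 * scale**2, 3, padding=1),
                nn.PixelShuffle(scale),
            )
            for _ in range(num_blocks)
        )

    def forward(self, x, depth=None):
        depth = depth or len(self.blocks)  # elastic: pick the depth at inference
        feat = self.head(x)
        outputs = []
        for block, tail in zip(self.blocks[:depth], self.tails[:depth]):
            feat = block(feat)
            outputs.append(tail(feat))
        return outputs  # one SR image per supervised depth

def dsl_psd_loss(outputs, hr, psd_weight=0.1):
    """Deep supervision (every output vs. the ground truth) plus a progressive
    self-distillation term: each shallower output is pulled toward the next
    deeper, detached output. The L1 form and the weight are assumptions."""
    loss = sum(F.l1_loss(o, hr) for o in outputs)
    for shallow, deep in zip(outputs[:-1], outputs[1:]):
        loss = loss + psd_weight * F.l1_loss(shallow, deep.detach())
    return loss
```

At deployment, a smaller platform would simply call `model(x, depth=k)` with a smaller `k`, discarding the unused blocks and tails; deep supervision is what makes each truncated sub-network a usable reconstructor on its own.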

Original language: English
Pages (from-to): 379-393
Number of pages: 15
Journal: Neurocomputing
Volume: 500
DOIs
State: Published - 21 Aug 2022

Keywords

  • Deep supervised learning
  • Elastic image reconstruction
  • Progressive self-distillation
  • SISR
  • Single-image super-resolution
