TY - JOUR
T1 - Adjustable super-resolution network via deep supervised learning and progressive self-distillation
AU - Li, Juncheng
AU - Fang, Faming
AU - Zeng, Tieyong
AU - Zhang, Guixu
AU - Wang, Xizhao
N1 - Publisher Copyright:
© 2022 Elsevier B.V.
PY - 2022/8/21
Y1 - 2022/8/21
N2 - With the use of convolutional neural networks, Single-Image Super-Resolution (SISR) has advanced dramatically in recent years. However, we observe that the structure of all these models must remain identical during training and testing. This severely limits the flexibility of the model, making the same model difficult to deploy on platforms of different sizes (e.g., computers, smartphones, and embedded devices). Therefore, it is crucial to develop a model that can adapt to different needs without retraining. To achieve this, we propose a lightweight Adjustable Super-Resolution Network (ASRN). Specifically, ASRN consists of a series of Multi-scale Aggregation Blocks (MABs), which are lightweight and efficient modules specially designed for feature extraction. Meanwhile, a Deep Supervised Learning (DSL) strategy is introduced into the model to guarantee the performance of each sub-network, and a novel Progressive Self-Distillation (PSD) strategy is proposed to further improve the intermediate results of the model. With the help of the DSL and PSD strategies, ASRN achieves elastic image reconstruction. Meanwhile, ASRN is the first elastic SISR model, showing good results after directly changing the model size without retraining.
AB - With the use of convolutional neural networks, Single-Image Super-Resolution (SISR) has advanced dramatically in recent years. However, we observe that the structure of all these models must remain identical during training and testing. This severely limits the flexibility of the model, making the same model difficult to deploy on platforms of different sizes (e.g., computers, smartphones, and embedded devices). Therefore, it is crucial to develop a model that can adapt to different needs without retraining. To achieve this, we propose a lightweight Adjustable Super-Resolution Network (ASRN). Specifically, ASRN consists of a series of Multi-scale Aggregation Blocks (MABs), which are lightweight and efficient modules specially designed for feature extraction. Meanwhile, a Deep Supervised Learning (DSL) strategy is introduced into the model to guarantee the performance of each sub-network, and a novel Progressive Self-Distillation (PSD) strategy is proposed to further improve the intermediate results of the model. With the help of the DSL and PSD strategies, ASRN achieves elastic image reconstruction. Meanwhile, ASRN is the first elastic SISR model, showing good results after directly changing the model size without retraining.
KW - Deep supervised learning
KW - Elastic image reconstruction
KW - Progressive self-distillation
KW - SISR
KW - Single-image super-resolution
UR - https://www.scopus.com/pages/publications/85131219074
U2 - 10.1016/j.neucom.2022.05.061
DO - 10.1016/j.neucom.2022.05.061
M3 - Article
AN - SCOPUS:85131219074
SN - 0925-2312
VL - 500
SP - 379
EP - 393
JO - Neurocomputing
JF - Neurocomputing
ER -