
Hybrid knowledge distillation from intermediate layers for efficient Single Image Super-Resolution

  • Jiao Xie
  • Linrui Gong
  • Shitong Shao
  • Shaohui Lin
  • Linkai Luo*
  • *Corresponding author for this work
  • Xiamen University
  • East China Normal University
  • MOE

Research output: Contribution to journal › Article › peer review

Abstract

Convolutional and Transformer models have achieved remarkable results for Single Image Super-Resolution (SISR). However, the tremendous memory and computation consumption of these models restricts their usage in resource-limited scenarios. Knowledge distillation, an effective model compression technique, has attracted significant research attention for the SISR task. In this paper, we propose a novel efficient SISR method via hybrid knowledge distillation from intermediate layers, termed HKDSR, which transfers knowledge from frequency information into RGB information. To accomplish this, we first pre-train the teacher with multiple intermediate upsampling layers to generate intermediate SR outputs. We then construct two kinds of intermediate knowledge: the Frequency Similarity Matrix (FSM) and Adaptive Channel Fusion (ACF). FSM mines the frequency-similarity relationships among the Ground-truth (GT) HR image and the intermediate SR outputs of the teacher and student via the Discrete Wavelet Transform. ACF merges the intermediate SR output of the teacher and the GT HR image along the channel dimension to adaptively align the intermediate SR output of the student. Finally, we incorporate the knowledge from FSM and ACF into the reconstruction loss to effectively improve student performance. Extensive experiments demonstrate the effectiveness of HKDSR on different benchmark datasets and network architectures.
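The FSM idea above rests on comparing images in the wavelet domain. As a minimal illustration (not the authors' implementation), the sketch below computes a single-level Haar DWT and a 4×4 matrix of cosine similarities between the subbands (LL, LH, HL, HH) of two images; the function names and the exact similarity measure are assumptions for illustration only.

```python
import numpy as np

def haar_dwt(img):
    """Single-level 2D Haar DWT: returns the (LL, LH, HL, HH) subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row-wise average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row-wise difference
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0      # low-low (coarse approximation)
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return LL, LH, HL, HH

def frequency_similarity_matrix(x, y):
    """Cosine similarity between every pair of subbands of x and y (4x4)."""
    bx, by = haar_dwt(x), haar_dwt(y)
    sim = np.empty((4, 4))
    for i, u in enumerate(bx):
        for j, v in enumerate(by):
            u_, v_ = u.ravel(), v.ravel()
            denom = np.linalg.norm(u_) * np.linalg.norm(v_) + 1e-8
            sim[i, j] = float(u_ @ v_) / denom
    return sim

# For identical inputs, the diagonal (same-subband similarity) is near 1.
rng = np.random.default_rng(0)
img = rng.random((8, 8))
S = frequency_similarity_matrix(img, img)
```

In a distillation setting, such a matrix could be computed for (GT, teacher SR) and (GT, student SR) pairs and the student penalized for the mismatch; HKDSR's actual FSM construction may differ in wavelet choice and similarity measure.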

Original language: English
Article number: 126592
Journal: Neurocomputing
Volume: 554
DOI
Publication status: Published - 14 Oct 2023
