Knowledge Distillation with Multi-Granularity Mixture of Priors for Image Super-Resolution

  • Simiao Li
  • Yun Zhang
  • Wei Li
  • Hanting Chen
  • Wenjia Wang
  • Bingyi Jing
  • Shaohui Lin*
  • Jie Hu*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Knowledge distillation (KD) is a promising yet challenging model compression approach that transfers rich learned representations from powerful but resource-demanding teacher models to efficient student models. Previous methods for image super-resolution (SR) are often tailored to specific teacher-student architectures, limiting their potential for improvement and hindering broader application. This work presents a novel KD framework for SR models, the multi-granularity Mixture of Priors Knowledge Distillation (MiPKD), which can be universally applied across a wide range of architectures at both the feature and block levels. The teacher's knowledge is effectively integrated with the student's features via the Feature Prior Mixer, and the reconstructed feature propagates dynamically during training via the Block Prior Mixer. Extensive experiments demonstrate the effectiveness of the proposed MiPKD technique.
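The abstract describes mixing teacher and student features into a "prior" that guides the student. As a rough illustration only (not the paper's actual MiPKD implementation), a feature-level prior mixer can be sketched as a convex blend of the two feature maps, with the student penalized for its distance to the blend; the blend ratio `alpha` here is a hypothetical hyperparameter:

```python
import numpy as np

def feature_prior_distillation_loss(teacher_feat, student_feat, alpha=0.5):
    """Blend teacher and student feature maps into a mixed prior and
    return the student's mean-squared distance to it.

    Illustrative sketch only: the real MiPKD Feature Prior Mixer is
    more involved; `alpha` is a made-up blend ratio for this example.
    """
    mixed_prior = alpha * teacher_feat + (1.0 - alpha) * student_feat
    return float(np.mean((student_feat - mixed_prior) ** 2))

# Toy feature maps in (N, C, H, W) layout, as is typical for SR backbones.
rng = np.random.default_rng(0)
teacher = rng.standard_normal((1, 16, 8, 8))
student = rng.standard_normal((1, 16, 8, 8))

loss = feature_prior_distillation_loss(teacher, student)
```

Note that when the student's features already match the teacher's, the mixed prior coincides with both and the loss vanishes, so the penalty only acts on the teacher-student gap.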

Original language: English
Title of host publication: 13th International Conference on Learning Representations, ICLR 2025
Publisher: International Conference on Learning Representations, ICLR
Pages: 27216-27232
Number of pages: 17
ISBN (Electronic): 9798331320850
State: Published - 2025
Event: 13th International Conference on Learning Representations, ICLR 2025 - Singapore, Singapore
Duration: 24 Apr 2025 - 28 Apr 2025

Publication series

Name: 13th International Conference on Learning Representations, ICLR 2025

Conference

Conference: 13th International Conference on Learning Representations, ICLR 2025
Country/Territory: Singapore
City: Singapore
Period: 24/04/25 - 28/04/25
