Preference-Consistent Knowledge Distillation for Recommender System

  • Zhangchi Zhu
  • Wei Zhang* (*Corresponding author for this work)

Research output: Contribution to journal › Article › peer-review

Abstract

Feature-based knowledge distillation has been applied to compress modern recommendation models, usually with projectors that align the dimensions of the (small) student recommendation model with those of the teacher. However, existing studies have focused only on making the projected features (i.e., student features after the projectors) similar to the teacher features, without investigating whether user preferences are actually transferred to the student features (i.e., student features before the projectors) in this manner. In this paper, we find that, because projectors are unconstrained, the transfer of user preferences is likely to be interfered with. We refer to this phenomenon as preference inconsistency; it greatly diminishes the power of feature-based knowledge distillation. To mitigate preference inconsistency, we propose PCKD, which consists of two regularization terms for projectors, along with a hybrid method that combines them. By focusing on items with high preference scores, PCKD significantly mitigates preference inconsistency and improves the performance of feature-based knowledge distillation. Extensive experiments on three public datasets and three backbones demonstrate the effectiveness of PCKD.
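To make the setup concrete, below is a minimal PyTorch sketch of feature-based distillation with a projector, plus a hypothetical preference-consistency regularizer. The abstract does not spell out PCKD's two regularization terms, so the loss forms here (MSE feature matching, a KL penalty on ranking disagreement over the teacher's top-k items) are illustrative assumptions, not the paper's actual method; all names and dimensions are made up for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Projector(nn.Module):
    """Maps student features (d_student) into the teacher's space (d_teacher)."""

    def __init__(self, d_student: int, d_teacher: int):
        super().__init__()
        self.proj = nn.Linear(d_student, d_teacher)

    def forward(self, h_student: torch.Tensor) -> torch.Tensor:
        return self.proj(h_student)


def feature_kd_loss(h_student, h_teacher, projector):
    """Standard feature matching: align *projected* student features with
    teacher features (here via MSE; teacher features assumed detached)."""
    return F.mse_loss(projector(h_student), h_teacher)


def preference_consistency_reg(user_s, items_s, user_t, items_t,
                               projector, k: int = 10):
    """Hypothetical regularizer: on the teacher's top-k items per user
    (items with high preference scores), encourage the student's
    pre-projection preference scores to agree with the scores induced by
    the projected features. This is an assumption standing in for PCKD's
    unspecified regularization terms."""
    # Teacher preference scores select the top-k items per user.
    scores_t = user_t @ items_t.T                       # (B, N)
    topk = scores_t.topk(k, dim=1).indices              # (B, k)

    # Student preference scores before and after projection, on those items.
    scores_pre = (user_s @ items_s.T).gather(1, topk)   # (B, k)
    proj_u, proj_i = projector(user_s), projector(items_s)
    scores_post = (proj_u @ proj_i.T).gather(1, topk)   # (B, k)

    # Penalize disagreement between the two score distributions.
    return F.kl_div(F.log_softmax(scores_pre, dim=1),
                    F.softmax(scores_post.detach(), dim=1),
                    reduction="batchmean")
```

In this reading, the regularizer constrains the projector so that alignment in the teacher's space cannot come at the expense of the preference ranking carried by the raw student features; the total loss would combine the base recommendation loss, the feature-matching term, and the regularizer with tunable weights.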

Original language: English
Pages (from-to): 2071-2084
Number of pages: 14
Journal: IEEE Transactions on Knowledge and Data Engineering
Volume: 37
Issue number: 4
State: Published - 2025

Keywords

  • Recommender system
  • knowledge distillation
  • model compression
