
Potential Knowledge Extraction Network for Class-Incremental Learning

  • Xidong Xi
  • Guitao Cao*
  • Wenming Cao
  • Yong Liu
  • Yan Li
  • Hong Wang
  • He Ren

*Corresponding author of this work

Research output: Contribution to journal › Article › peer-review

Abstract

Class-Incremental Learning (CIL) aims to dynamically learn new classes without forgetting the old ones, and it is typically achieved by extracting knowledge from old data and continuously transferring it to new tasks. In replay-based approaches, selecting appropriate exemplars is of great importance, since exemplars are the most direct form of retaining old knowledge. In this paper, we propose a novel CIL framework, the Potential Knowledge Extraction Network (PKENet), which addresses the neglect of inter-sample relational knowledge in most existing works and suggests an innovative approach to exemplar selection. Specifically, to address the challenge of knowledge transfer, we design a relation consistency loss and a hybrid cross-entropy loss: the former extracts structural knowledge from the old model, while the latter captures graph-wise knowledge, enabling the new model to acquire more old knowledge. To enhance the anti-forgetting effect of the exemplar set, we devise a maximum-forgetting-priority method that selects the samples most susceptible to interference from model updates. To overcome the prediction bias problem in CIL, we introduce the Total Direct Effect inference method into our model. Experimental results on the CIFAR100, ImageNet-Full, and ImageNet-Subset datasets show that multiple state-of-the-art CIL methods can be directly combined with our PKENet to achieve significant performance improvements. Code: https://github.com/XXDyeah/PKENet.
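The abstract describes maximum-forgetting-priority exemplar selection only at a high level. A minimal sketch of the general idea, under the assumption (not stated in the abstract) that susceptibility to interference is measured as the drop in the old model's true-class probability after a model update; the function name and criterion are illustrative, not the paper's actual implementation:

```python
import numpy as np

def max_forgetting_priority(probs_before, probs_after, labels, m):
    """Select the m samples whose true-class probability dropped most
    after the model update (a proxy for 'most susceptible to interference').

    probs_before, probs_after: (N, C) class-probability matrices from the
    model before and after the update; labels: (N,) true class indices.
    """
    idx = np.arange(len(labels))
    drop = probs_before[idx, labels] - probs_after[idx, labels]
    # Largest confidence drop first = highest forgetting priority.
    return np.argsort(-drop)[:m]

# Toy example: 4 samples, 3 classes.
before = np.array([[0.9, 0.05, 0.05],
                   [0.1, 0.8,  0.1],
                   [0.2, 0.2,  0.6],
                   [0.7, 0.2,  0.1]])
after = np.array([[0.4,  0.3,  0.3],    # large drop on class 0
                  [0.1,  0.75, 0.15],   # small drop
                  [0.3,  0.3,  0.4],    # moderate drop
                  [0.65, 0.25, 0.1]])   # tiny drop
labels = np.array([0, 1, 2, 0])
print(max_forgetting_priority(before, after, labels, m=2))  # → [0 2]
```

Samples ranked this way would be preferred for the replay buffer, the intuition being that the most easily forgotten samples carry the most anti-forgetting value when rehearsed.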

Original language: English
Article number: 128923
Journal: Neurocomputing
Volume: 616
DOI
Publication status: Published - 1 Feb 2025
