TY - JOUR
T1 - Potential Knowledge Extraction Network for Class-Incremental Learning
AU - Xi, Xidong
AU - Cao, Guitao
AU - Cao, Wenming
AU - Liu, Yong
AU - Li, Yan
AU - Wang, Hong
AU - Ren, He
N1 - Publisher Copyright:
© 2024 Elsevier B.V.
PY - 2025/2/1
Y1 - 2025/2/1
N2 - Class-Incremental Learning (CIL) aims to dynamically learn new classes without forgetting the old ones, and it is typically achieved by extracting knowledge from old data and continuously transferring it to new tasks. In replay-based approaches, selecting appropriate exemplars is of great importance, since exemplars represent the most direct form of retaining old knowledge. In this paper, we propose a novel CIL framework, the Potential Knowledge Extraction Network (PKENet), which addresses the neglect of inter-sample relational knowledge in most existing works and suggests an innovative approach to exemplar selection. Specifically, to address the challenge of knowledge transfer, we design a relation consistency loss and a hybrid cross-entropy loss: the former extracts structural knowledge from the old model, while the latter captures graph-wise knowledge, enabling the new model to acquire more old knowledge. To enhance the anti-forgetting effect of the exemplar set, we devise a maximum-forgetting-priority method that selects the samples most susceptible to interference from model updates. To overcome the prediction bias problem in CIL, we introduce the Total Direct Effect inference method into our model. Experimental results on the CIFAR100, ImageNet-Full, and ImageNet-Subset datasets show that multiple state-of-the-art CIL methods can be directly combined with our PKENet to reap significant performance improvements. Code: https://github.com/XXDyeah/PKENet.
KW - Class-Incremental Learning
KW - Debiased prediction
KW - Exemplar selection
KW - Inter-sample relation
KW - Potential knowledge extraction
UR - https://www.scopus.com/pages/publications/85209658233
U2 - 10.1016/j.neucom.2024.128923
DO - 10.1016/j.neucom.2024.128923
M3 - Article
AN - SCOPUS:85209658233
SN - 0925-2312
VL - 616
JO - Neurocomputing
JF - Neurocomputing
M1 - 128923
ER -