TY - JOUR
T1 - Bias to Balance
T2 - New-Knowledge-Preferred Few-Shot Class-Incremental Learning via Transition Calibration
AU - Zhang, Hongquan
AU - Zhang, Zhizhong
AU - Tan, Xin
AU - Qu, Yanyun
AU - Xie, Yuan
N1 - Publisher Copyright:
© 2012 IEEE.
PY - 2025
Y1 - 2025
N2 - Humans can quickly learn new concepts from limited experience without forgetting previously acquired knowledge. The corresponding ability in machine learning is referred to as few-shot class-incremental learning (FSCIL). Although some methods tackle this problem by devoting equal effort to preventing forgetting and promoting learning, we find that existing techniques give too little importance to the new categories, whose training samples are rather scarce. In this article, we propose a new biased-to-unbiased rectification method that introduces a trainable transition matrix to mitigate the prediction discrepancy between the old and new classes. This transition matrix is constrained to be diagonally dominant, normalized, and differentiable, with a new-knowledge-preferred prior, so as to resolve the strong bias between abundant old knowledge and limited new knowledge. Hence, by giving the new classes more chances, we achieve a balanced solution between learning new concepts and preventing catastrophic forgetting. Extensive experiments on miniImagenet, CIFAR100, and CUB200 demonstrate that our method outperforms the latest state-of-the-art methods by 1.1%, 1.44%, and 2.08%, respectively.
AB - Humans can quickly learn new concepts from limited experience without forgetting previously acquired knowledge. The corresponding ability in machine learning is referred to as few-shot class-incremental learning (FSCIL). Although some methods tackle this problem by devoting equal effort to preventing forgetting and promoting learning, we find that existing techniques give too little importance to the new categories, whose training samples are rather scarce. In this article, we propose a new biased-to-unbiased rectification method that introduces a trainable transition matrix to mitigate the prediction discrepancy between the old and new classes. This transition matrix is constrained to be diagonally dominant, normalized, and differentiable, with a new-knowledge-preferred prior, so as to resolve the strong bias between abundant old knowledge and limited new knowledge. Hence, by giving the new classes more chances, we achieve a balanced solution between learning new concepts and preventing catastrophic forgetting. Extensive experiments on miniImagenet, CIFAR100, and CUB200 demonstrate that our method outperforms the latest state-of-the-art methods by 1.1%, 1.44%, and 2.08%, respectively.
KW - Classification bias
KW - few-shot
KW - incremental learning
KW - transition calibration
UR - https://www.scopus.com/pages/publications/105002827675
U2 - 10.1109/TNNLS.2025.3550429
DO - 10.1109/TNNLS.2025.3550429
M3 - Article
AN - SCOPUS:105002827675
SN - 2162-237X
VL - 36
SP - 15347
EP - 15358
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
IS - 8
ER -