TY - GEN
T1 - Isolation and Integration
T2 - 12th International Conference on Computational Visual Media, CVM 2024
AU - Zhang, Wei
AU - Xie, Yuan
AU - Zhang, Zhizhong
AU - Tan, Xin
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024.
PY - 2024
Y1 - 2024
N2 - Continual learning aims to learn effectively from streaming data, adapting to newly emerging classes without forgetting old ones. Conventional models without pre-training are built from the ground up and suffer from severe catastrophic forgetting. Recently, pre-training has made significant strides, opening the door to a wide range of pre-trained models for continual learning. To avoid the stage-learning bottlenecks of traditional single-backbone networks, we propose a new stage-isolation-based class-incremental learning framework that leverages parameter-efficient tuning to fine-tune the pre-trained model for each task, thereby mitigating information interference and conflicts among tasks. At the same time, it makes effective use of the strong generalization capabilities inherent in pre-trained networks, which can be seamlessly adapted to new tasks. We then fuse the features produced by all trained backbone networks to construct a unified feature representation. This fused representation retains the distinctive features of each task while incorporating the commonalities shared across all tasks. Finally, we use selected exemplars to compute class prototypes that serve as the classifier weights for the final prediction. We conduct extensive experiments on different class-incremental learning benchmarks and settings; the results indicate that our method consistently outperforms other methods by a large margin.
AB - Continual learning aims to learn effectively from streaming data, adapting to newly emerging classes without forgetting old ones. Conventional models without pre-training are built from the ground up and suffer from severe catastrophic forgetting. Recently, pre-training has made significant strides, opening the door to a wide range of pre-trained models for continual learning. To avoid the stage-learning bottlenecks of traditional single-backbone networks, we propose a new stage-isolation-based class-incremental learning framework that leverages parameter-efficient tuning to fine-tune the pre-trained model for each task, thereby mitigating information interference and conflicts among tasks. At the same time, it makes effective use of the strong generalization capabilities inherent in pre-trained networks, which can be seamlessly adapted to new tasks. We then fuse the features produced by all trained backbone networks to construct a unified feature representation. This fused representation retains the distinctive features of each task while incorporating the commonalities shared across all tasks. Finally, we use selected exemplars to compute class prototypes that serve as the classifier weights for the final prediction. We conduct extensive experiments on different class-incremental learning benchmarks and settings; the results indicate that our method consistently outperforms other methods by a large margin.
KW - Class-Incremental Learning
KW - Continual Learning
KW - Parameter-Efficient Tuning
KW - Pre-Trained Models
UR - https://www.scopus.com/pages/publications/85190472861
U2 - 10.1007/978-981-97-2092-7_15
DO - 10.1007/978-981-97-2092-7_15
M3 - Conference contribution
AN - SCOPUS:85190472861
SN - 9789819720910
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 302
EP - 315
BT - Computational Visual Media - 12th International Conference, CVM 2024, Proceedings
A2 - Zhang, Fang-Lue
A2 - Sharf, Andrei
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 10 April 2024 through 12 April 2024
ER -
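
The abstract describes a prediction pipeline in three steps: per-task backbones tuned with parameter-efficient methods, fusion of their features into one unified representation, and nearest-prototype classification with prototypes computed from selected exemplars. Below is a minimal, self-contained sketch of that general idea, not the authors' implementation: backbone outputs are faked with random vectors, and all names, dimensions, and the cosine-similarity rule are illustrative assumptions.

```python
# Hypothetical sketch: fused per-task features + exemplar prototypes as classifier weights.
# Not the paper's code; backbones are simulated and all dimensions are assumptions.
import numpy as np

rng = np.random.default_rng(0)

NUM_TASKS = 3            # one parameter-efficiently tuned backbone per task (assumed)
FEAT_DIM = 8             # feature dimension of each backbone (assumed)
NUM_CLASSES = 6          # total classes seen so far
EXEMPLARS_PER_CLASS = 5


def extract_fused_features(images: np.ndarray) -> np.ndarray:
    """Stand-in for running every task-specific backbone and concatenating
    their outputs into a unified representation (N x NUM_TASKS*FEAT_DIM)."""
    per_task = [rng.normal(size=(images.shape[0], FEAT_DIM)) for _ in range(NUM_TASKS)]
    return np.concatenate(per_task, axis=1)


def class_prototypes(features: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Mean exemplar feature per class, L2-normalized; these act as classifier weights."""
    protos = np.stack([features[labels == c].mean(axis=0) for c in range(NUM_CLASSES)])
    return protos / np.linalg.norm(protos, axis=1, keepdims=True)


def predict(query_feats: np.ndarray, prototypes: np.ndarray) -> np.ndarray:
    """Cosine similarity to each prototype; the highest-scoring class wins."""
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    return (q @ prototypes.T).argmax(axis=1)


# Toy run: exemplar "images" are zero placeholders, only the shapes matter here.
exemplar_images = np.zeros((NUM_CLASSES * EXEMPLARS_PER_CLASS, 3, 32, 32))
exemplar_labels = np.repeat(np.arange(NUM_CLASSES), EXEMPLARS_PER_CLASS)

prototypes = class_prototypes(extract_fused_features(exemplar_images), exemplar_labels)

test_images = np.zeros((4, 3, 32, 32))
print(predict(extract_fused_features(test_images), prototypes))
```

The sketch only conveys the data flow implied by the abstract (isolated per-task features, fused representation, exemplar-based prototypes); the actual tuning method, fusion scheme, and similarity metric used in the paper may differ.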