Prototype Alignment with LoRA Fusion for Class-Incremental Learning

Research output: Contribution to journal › Conference article › peer-review

Abstract

Recent advances in pre-trained models have improved performance on downstream tasks thanks to their strong generalizability. Nevertheless, models that are fine-tuned continually often suffer from catastrophic forgetting and a loss of generalization. To address these issues, we propose a novel approach that employs a distinct Low-Rank Adaptation (LoRA) module for each task. These modules are parameter-efficient and are fused across tasks so that the model maintains strong performance on both old and new classes. Additionally, we exploit semantic relationships between class prototypes to reconstruct old prototypes in the context of new tasks. Our experiments demonstrate that this method significantly outperforms baseline approaches across various class-incremental learning benchmarks, offering an efficient and effective solution for mitigating forgetting and preserving model performance.
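The abstract describes two mechanisms: per-task LoRA modules that are fused across tasks, and prototype reconstruction from semantic relationships. Below is a minimal PyTorch sketch of the first mechanism only. The class names (`TaskLoRA`, `LoRALinear`), the rank, and the equal-weight averaging fusion rule are illustrative assumptions for exposition, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TaskLoRA(nn.Module):
    """Low-rank adapter (delta W = B @ A) for a single incremental task."""

    def __init__(self, in_dim: int, out_dim: int, rank: int = 4):
        super().__init__()
        self.A = nn.Parameter(torch.randn(rank, in_dim) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(out_dim, rank))        # up-projection, zero-init

    def delta(self) -> torch.Tensor:
        # Full-rank view of the low-rank weight update.
        return self.B @ self.A


class LoRALinear(nn.Module):
    """A frozen pre-trained linear layer plus one LoRA adapter per task."""

    def __init__(self, base: nn.Linear):
        super().__init__()
        self.base = base.requires_grad_(False)  # pre-trained weights stay frozen
        self.adapters = nn.ModuleList()

    def add_task(self, rank: int = 4) -> None:
        # Call at the start of each new task; only the newest adapter is trained.
        self.adapters.append(
            TaskLoRA(self.base.in_features, self.base.out_features, rank)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Fuse adapters by averaging their updates (an assumed fusion rule;
        # the paper's actual rule may weight tasks differently).
        w = self.base.weight
        if len(self.adapters) > 0:
            w = w + sum(a.delta() for a in self.adapters) / len(self.adapters)
        return F.linear(x, w, self.base.bias)


# Toy usage: three incremental tasks on a 768-dim layer.
layer = LoRALinear(nn.Linear(768, 768))
for _ in range(3):
    layer.add_task(rank=4)
print(layer(torch.randn(2, 768)).shape)  # torch.Size([2, 768])
```

Zero-initializing each adapter's up-projection keeps a newly added task's update at zero, so attaching an adapter does not perturb the fused weights until training on the new task begins.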

Keywords

  • continual learning
  • incremental learning
  • low-rank adaptation
  • parameter-efficient fine-tuning
