Abstract
Recent advances in pre-trained models have improved performance on downstream tasks thanks to their strong generalization ability. Despite this, models that are fine-tuned continually often suffer from catastrophic forgetting and a loss of generalization. To address these issues, we propose a novel approach that assigns a distinct Low-Rank Adaptation (LoRA) module to each task. These modules are parameter-efficient and are integrated across tasks so that the model maintains strong performance on both old and new classes. Additionally, we investigate semantic relationships between class prototypes to effectively reconstruct old prototypes in the context of new tasks. Our experiments demonstrate that this method significantly outperforms baseline approaches across various class-incremental learning benchmarks, offering an efficient and effective solution for mitigating forgetting and preserving model performance.
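To make the per-task LoRA idea concrete, below is a minimal sketch (not the paper's exact method) of a frozen linear layer that keeps one low-rank adapter per task and integrates them at inference by summing their low-rank updates; the class name `PerTaskLoRALinear`, the rank, and the integration rule are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class PerTaskLoRALinear(nn.Module):
    """Frozen pre-trained linear layer with one LoRA adapter per task.

    Hypothetical sketch: the paper's exact integration strategy is not
    specified here; this version simply sums all per-task low-rank updates.
    """

    def __init__(self, in_features, out_features, rank=4):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)   # pre-trained weights stay frozen
        self.base.bias.requires_grad_(False)
        self.rank = rank
        self.lora_A = nn.ParameterList()          # one (rank x in) matrix per task
        self.lora_B = nn.ParameterList()          # one (out x rank) matrix per task

    def add_task(self):
        """Attach a fresh LoRA module for a new task; earlier adapters are frozen."""
        in_f, out_f = self.base.in_features, self.base.out_features
        for A, B in zip(self.lora_A, self.lora_B):
            A.requires_grad_(False)
            B.requires_grad_(False)
        self.lora_A.append(nn.Parameter(torch.randn(self.rank, in_f) * 0.01))
        self.lora_B.append(nn.Parameter(torch.zeros(out_f, self.rank)))  # zero init: new task starts from the frozen model

    def forward(self, x):
        out = self.base(x)
        # Integrate all per-task low-rank updates so old and new classes share one model.
        for A, B in zip(self.lora_A, self.lora_B):
            out = out + x @ A.t() @ B.t()
        return out

layer = PerTaskLoRALinear(768, 768, rank=4)
for task_id in range(3):   # e.g., three incremental tasks
    layer.add_task()        # only the newest adapter remains trainable
```

Only the latest adapter's two small matrices are trainable at any time, which is what makes the approach parameter-efficient while older adapters preserve knowledge of previous tasks.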
| Original language | English |
|---|---|
| Journal | Proceedings - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing |
| DOIs | |
| State | Published - 2025 |
| Event | 2025 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2025), Hyderabad, India. Duration: 6 Apr 2025 → 11 Apr 2025 |
Keywords
- continual learning
- incremental learning
- low-rank adaptation
- parameter-efficient fine-tuning