Recent Advances of Foundation Language Models-based Continual Learning: A Survey

Research output: Contribution to journal › Article › peer-review

17 Scopus citations

Abstract

Recently, foundation language models (LMs) have achieved remarkable results in natural language processing and computer vision. Unlike traditional neural network models, foundation LMs acquire rich common-sense knowledge through pre-training on extensive unsupervised datasets with vast numbers of parameters, which gives them a strong capacity for transfer learning. Despite these capabilities, LMs still suffer from catastrophic forgetting, which hinders their ability to learn continuously as humans do. To address this, continual learning (CL) methodologies have been introduced, allowing LMs to adapt to new tasks while retaining previously learned knowledge. However, a systematic taxonomy of existing approaches and a comparison of their performance are still lacking. In this article, we comprehensively review, summarize, and classify the existing literature on CL-based approaches applied to foundation language models, including pre-trained language models, large language models, and vision-language models. We divide these studies into offline and online CL, which encompass traditional methods, parameter-efficient-based methods, instruction tuning-based methods, and continual pre-training methods. Additionally, we outline the typical datasets and metrics employed in CL research and provide a detailed analysis of the challenges and future directions for LM-based continual learning.
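For readers unfamiliar with the problem setting, the sketch below illustrates experience replay, one of the traditional CL methods in the taxonomy the abstract describes: a small buffer of past-task examples is mixed into each new task's updates to mitigate catastrophic forgetting. This is a minimal, self-contained illustration under toy assumptions, not the method of any specific surveyed paper; the model, data generator, and hyperparameters (TinyLM, make_task_batches, the buffer size) are hypothetical placeholders.

```python
# Minimal replay-based continual learning sketch (illustrative only).
# TinyLM and make_task_batches are toy stand-ins, not from the survey.
import random
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    """Toy stand-in for a foundation LM: embedding -> mean-pool -> linear head."""
    def __init__(self, vocab=100, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.head = nn.Linear(dim, vocab)

    def forward(self, x):
        return self.head(self.embed(x).mean(dim=1))

def make_task_batches(task_id, n=20, vocab=100, seq=8):
    """Synthetic (input, label) pairs standing in for one task's data."""
    g = torch.Generator().manual_seed(task_id)
    xs = torch.randint(0, vocab, (n, seq), generator=g)
    ys = torch.randint(0, vocab, (n,), generator=g)
    return list(zip(xs, ys))

model = TinyLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
replay_buffer = []   # small sample of examples from earlier tasks
BUFFER_CAP = 50

for task_id in range(3):          # tasks arrive sequentially (offline CL)
    for x, y in make_task_batches(task_id):
        batch_x, batch_y = [x], [y]
        # Mix replayed past-task examples into the update to reduce forgetting.
        for rx, ry in random.sample(replay_buffer, min(3, len(replay_buffer))):
            batch_x.append(rx)
            batch_y.append(ry)
        logits = model(torch.stack(batch_x))
        loss = loss_fn(logits, torch.stack(batch_y))
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Reservoir-style bookkeeping keeps the buffer small and mixed.
        if len(replay_buffer) < BUFFER_CAP:
            replay_buffer.append((x, y))
        else:
            replay_buffer[random.randrange(BUFFER_CAP)] = (x, y)
```

Replay is only one branch of the taxonomy; parameter-efficient methods would instead freeze the backbone and train small per-task modules, and continual pre-training would keep updating the base model on new corpora.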

Original language: English
Article number: 112
Journal: ACM Computing Surveys
Volume: 57
Issue number: 5
State: Published - 9 Jan 2025

Keywords

  • Continual learning
  • foundation language models
  • large language models
  • pre-trained language models
  • survey
  • vision-language models
