The survey provides a comprehensive overview of research progress in continual learning (CL) for Large Language Models (LLMs), covering methods such as Continual Pre-Training (CPT), Domain-Adaptive Pre-training (DAP), and Continual Fine-Tuning (CFT). It examines strategies for preventing catastrophic forgetting and for keeping models adaptable as tasks and data distributions evolve over time.
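To make one of the forgetting-mitigation strategies discussed in such surveys concrete, below is a minimal sketch of experience replay during continual fine-tuning: a small buffer of examples from earlier tasks is mixed into each batch for the current task. The toy model, synthetic data, buffer sizes, and hyperparameters are illustrative assumptions, not details taken from the survey.

```python
# Minimal sketch of replay-based continual fine-tuning (one common strategy
# for mitigating catastrophic forgetting). Toy model and synthetic data only.
import random
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for an LLM: a small classifier over 32-dim features.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 4))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

replay_buffer = []        # (x, y) pairs retained from earlier tasks
BUFFER_CAP = 200          # maximum number of retained examples
REPLAY_PER_BATCH = 8      # replayed examples mixed into each new batch

def make_task(n=128):
    """Synthetic task: random features with random labels (illustration only)."""
    return [(torch.randn(32), torch.randint(0, 4, ()).item()) for _ in range(n)]

def train_on_task(task_data, epochs=3, batch_size=16):
    for _ in range(epochs):
        random.shuffle(task_data)
        for i in range(0, len(task_data), batch_size):
            batch = task_data[i:i + batch_size]
            # Mix in a few examples replayed from earlier tasks, if any.
            if replay_buffer:
                batch = batch + random.sample(
                    replay_buffer, min(REPLAY_PER_BATCH, len(replay_buffer)))
            xs = torch.stack([x for x, _ in batch])
            ys = torch.tensor([y for _, y in batch])
            loss = loss_fn(model(xs), ys)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    # After finishing a task, retain a random subset for future replay.
    replay_buffer.extend(random.sample(task_data, min(32, len(task_data))))
    del replay_buffer[:max(0, len(replay_buffer) - BUFFER_CAP)]

for task_id in range(3):  # a short stream of sequential tasks
    train_on_task(make_task())
```

The same replay pattern carries over to fine-tuning an actual language model: the buffer would hold earlier-task text examples, and the classifier loss would be replaced by the usual token-level cross-entropy.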
Detailed sections cover each of these approaches in turn.
The survey closes by suggesting future research directions and considering the challenges of integrating LLMs into continually evolving data streams.