Research into parameter-efficient tuning of LLMs sheds light on catastrophic forgetting, the loss of previously learned abilities that occurs when LLMs are fine-tuned on a sequence of diverse tasks.
Investigates mode connectivity, that is, whether the solutions reached on different tasks are linked by a low-loss path in parameter space, in LLMs during continual learning; a minimal probe of this idea is sketched below.
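Mode connectivity can be probed by evaluating loss along the straight line between two adapter checkpoints from consecutive tasks. The following is a minimal PyTorch sketch of such a probe, not the paper's exact protocol; the checkpoint state dicts, eval_loader, and loss_fn are assumed inputs.

import torch

def interpolate_state(state_a, state_b, alpha):
    """Return the element-wise interpolation (1 - alpha) * a + alpha * b."""
    return {k: (1.0 - alpha) * state_a[k] + alpha * state_b[k] for k in state_a}

@torch.no_grad()
def mode_connectivity_curve(model, state_a, state_b, eval_loader, loss_fn, steps=11):
    """Evaluate loss at evenly spaced points on the line between two adapter
    checkpoints; a flat, low-loss curve suggests the two solutions lie in a
    connected low-loss region of parameter space."""
    losses = []
    for i in range(steps):
        alpha = i / (steps - 1)
        model.load_state_dict(interpolate_state(state_a, state_b, alpha), strict=False)
        model.eval()
        total, batches = 0.0, 0
        for batch in eval_loader:
            logits = model(batch["input_ids"])
            total += loss_fn(logits, batch["labels"]).item()
            batches += 1
        losses.append(total / max(batches, 1))
    return losses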
Presents Interpolation-based LoRA (I-LoRA), a dual-memory experience replay method built on interpolating LoRA parameters; a sketch of the core update follows.
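One plausible reading of the dual-memory design is a "fast" set of LoRA weights trained on incoming data with experience replay, and a "slow" set that tracks it through moving-average interpolation. The sketch below illustrates that idea only; fast_lora, slow_lora, replay_buffer, and the coefficient LAMBDA are illustrative names and values rather than the paper's API, and details such as when the interpolation is applied should be checked against the paper itself.

import torch

LAMBDA = 0.95  # interpolation coefficient between slow and fast weights (assumed value)

@torch.no_grad()
def interpolate_slow_weights(slow_lora, fast_lora, lam=LAMBDA):
    """slow <- lam * slow + (1 - lam) * fast, applied per LoRA tensor."""
    for name, fast_param in fast_lora.items():
        slow_lora[name].mul_(lam).add_(fast_param, alpha=1.0 - lam)

def train_on_task(model, fast_lora, slow_lora, task_loader, replay_buffer,
                  optimizer, loss_fn):
    """Update the fast (working-memory) LoRA weights on the new task mixed
    with replayed samples, then pull the slow (long-term-memory) weights
    toward them by interpolation after each step."""
    for batch in task_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(batch["input_ids"]), batch["labels"])
        replayed = replay_buffer.sample()  # hypothetical buffer API for old-task samples
        if replayed is not None:
            loss = loss + loss_fn(model(replayed["input_ids"]), replayed["labels"])
        loss.backward()
        optimizer.step()  # updates only the fast LoRA tensors
        interpolate_slow_weights(slow_lora, fast_lora)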
Findings:
Implications for AI: Offers a strong baseline for LLM continual learning research.
The study points to a promising direction for AI training regimes, making them more robust against forgetting while adapting to new datasets. The work could shape how AI systems are trained and deployed across sectors. Discover more in their paper.