Spurious Forgetting in Continual Learning of Language Models
Junhao Zheng, Xidi Cai, Shengjie Qiu, Qianli Ma
arXiv.org Artificial Intelligence
Recent advancements in large language models (LLMs) reveal a perplexing phenomenon in continual learning: despite extensive training, models experience significant performance declines, raising questions about task alignment and underlying knowledge retention. This study first explores the concept of "spurious forgetting", proposing that such performance drops often reflect a decline in task alignment rather than true knowledge loss. Through controlled experiments with a synthesized dataset, we investigate the dynamics of model performance during the initial training phases of new tasks, discovering that early optimization steps can disrupt previously established task alignments. Our theoretical analysis connects these shifts to orthogonal updates in model weights, providing a robust framework for understanding this behavior. Ultimately, we introduce a Freezing strategy that fixes the bottom layers of the model, leading to substantial improvements across four continual learning scenarios. Our findings underscore the critical distinction between task alignment and knowledge retention, paving the way for more effective strategies in continual learning.
Jan-23-2025
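The Freezing strategy described in the abstract can be illustrated with a short sketch. The snippet below is a minimal illustration, assuming a GPT-2-style Hugging Face model; the model name, the learning rate, and the number of frozen layers (NUM_FROZEN_LAYERS) are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of a bottom-layer freezing strategy, assuming a
# GPT-2-style Hugging Face model. The model choice and the number of
# frozen layers are illustrative assumptions, not values from the paper.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")
NUM_FROZEN_LAYERS = 3  # hypothetical: how many bottom blocks to fix

# Fix the bottom transformer blocks so that early optimization steps on
# a new task cannot disturb the task alignment they encode.
for block in model.transformer.h[:NUM_FROZEN_LAYERS]:
    for param in block.parameters():
        param.requires_grad = False

# Optimize only the remaining (upper-layer) parameters.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-5
)
```

Training on each new task then proceeds as usual with this optimizer; because the bottom blocks receive no gradient updates, any alignment established there during earlier tasks is preserved by construction.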