Exploring Forgetting in Large Language Model Pre-Training
Chonghua Liao, Ruobing Xie, Xingwu Sun, Haowen Sun, Zhanhui Kang
arXiv.org Artificial Intelligence
Catastrophic forgetting remains a formidable obstacle to building omniscient large language models (LLMs). Despite pioneering research on task-level forgetting in LLM fine-tuning, forgetting during pre-training has received little attention. We systematically explored the existence and measurement of forgetting in pre-training, questioning traditional metrics such as perplexity (PPL) and introducing new metrics that better detect entity memory retention. Based on this revised assessment of forgetting metrics, we explored low-cost, straightforward methods to mitigate forgetting during the pre-training phase. We further analyzed the learning curves, offering insights into the dynamics of forgetting. These extensive evaluations and analyses of forgetting during pre-training could facilitate future research on LLMs.
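As context for the PPL-based measurement discussed in the abstract, the sketch below shows one simple way to probe forgetting: tracking perplexity of a fixed, entity-bearing sentence across successive pre-training checkpoints. This is not the paper's method and does not reproduce its new entity-memory metrics; the checkpoint names and probe sentence are illustrative placeholders, and a Hugging Face causal LM is assumed.

```python
# Hypothetical sketch: track perplexity (PPL) of a fixed probe sentence across
# pre-training checkpoints; a rise in PPL at later checkpoints is one crude
# signal of forgetting. Checkpoint IDs and the probe text are placeholders.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def perplexity(model, tokenizer, text: str) -> float:
    """Token-level perplexity of `text` under `model` (lower = better retained)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])  # causal LM shifts labels internally
    return math.exp(out.loss.item())

if __name__ == "__main__":
    # Placeholder names standing in for successive pre-training snapshots.
    checkpoints = ["org/model-step-10000", "org/model-step-20000"]
    probe = "Marie Curie was born in Warsaw in 1867."  # entity-bearing probe sentence

    for ckpt in checkpoints:
        tok = AutoTokenizer.from_pretrained(ckpt)
        model = AutoModelForCausalLM.from_pretrained(ckpt).eval()
        print(ckpt, perplexity(model, tok, probe))
```

A PPL curve like this is exactly the kind of traditional signal the paper questions, since perplexity on a probe can stay flat even when specific entity knowledge degrades, which motivates the entity-memory metrics introduced in the work.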
Oct-22-2024
- Country:
- North America > United States (0.68)
- Genre:
- Research Report > New Finding (0.93)
- Industry:
- Information Technology > Security & Privacy (0.46)
- Technology: