Early Detection and Reduction of Memorisation for Domain Adaptation and Instruction Tuning
Slack, Dean L., Moubayed, Noura Al
arXiv.org Artificial Intelligence
Most defences target the pre-training stage, leaving memorisation during fine-tuning, especially for domain adaptation and instruction tuning, poorly understood. We fine-tune Pythia, Llama3, and Mistral models spanning 1.4B-70B parameters on common evaluation datasets and track verbatim memorisation throughout training. We find that memorisation increases dramatically in the first few epochs, often well before either validation perplexity or evaluation performance is optimised. We use a simple but effective n-gram memorisation score that reliably precedes verbatim memorisation; using it as an early-stopping criterion mitigates memorisation with minimal performance loss. Further, we introduce an n-gram-aware loss regulariser and show that it reduces memorisation across all model families tested by up to 40% while minimising evaluation performance trade-offs compared to an existing memorisation mitigation strategy. These results yield practical, scalable insights into memorisation dynamics during language model fine-tuning.
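The abstract does not spell out how the n-gram memorisation score is computed, but a natural reading is an overlap measure between model generations and their source training examples. The sketch below is an illustrative assumption, not the paper's definition: it scores the fraction of a generation's n-grams that appear verbatim in the reference, and uses a running average as a hypothetical early-stopping signal (the helper names and the default n=4 and threshold are invented for this example).

```python
# Hypothetical n-gram memorisation score: the fraction of n-grams in a
# model generation that also occur verbatim in the training example it
# was prompted from. Assumed for illustration; not the paper's exact metric.

def ngrams(tokens, n):
    """Set of all contiguous n-grams in a token sequence."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def ngram_memorisation_score(generated, reference, n=4):
    """Fraction of the generation's n-grams found verbatim in the reference."""
    gen = ngrams(generated, n)
    if not gen:
        return 0.0
    ref = ngrams(reference, n)
    return len(gen & ref) / len(gen)

def should_stop(scores, threshold=0.5):
    """Hypothetical early-stopping rule: halt fine-tuning once the mean
    memorisation score over sampled training examples exceeds a threshold."""
    return sum(scores) / len(scores) > threshold
```

Such a score rises before exact verbatim reproduction appears, because partial n-gram copying precedes full-sequence copying, which is consistent with the abstract's claim that the score reliably precedes verbatim memorisation.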
Oct-14-2025