LLM Unlearning using Gradient Ratio-Based Influence Estimation and Noise Injection

Ameya Anjarlekar, Sandeep Pombra

arXiv.org Artificial Intelligence 

Existing empirical methods for LLM unlearning often yield incomplete forgetting or unintended degradation of unrelated knowledge because they poorly localize where the forget data is stored. In this work, we propose GRIN: a modular and targeted framework for LLM unlearning. GRIN introduces a novel gradient-ratio-based metric to identify the parameters most responsible for memorizing forget data. We then perform selective noise injection into these parameters prior to fine-tuning, which improves unlearning performance while maintaining model utility. Finally, we propose new evaluation metrics tailored to the LLM setting and validate our approach on standard benchmarks such as TOFU, WMDP, and SafePKU. Content Warning: This paper contains examples of critically harmful language.
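The abstract describes a two-step recipe: score each parameter by a gradient ratio contrasting the forget set against retained data, then inject noise only into the top-scoring parameters before fine-tuning. The sketch below illustrates one plausible reading of that recipe in PyTorch; the function names, the exact ratio `|g_forget| / (|g_retain| + eps)`, and the top-fraction/noise-scale hyperparameters are assumptions for illustration, not the paper's published formulation.

```python
import torch

def gradient_ratio_scores(model, forget_loss, retain_loss, eps=1e-8):
    """Per-parameter influence proxy: |grad on forget data| / |grad on retain data|.

    High scores mark parameters that matter much more for the forget set
    than for retained knowledge (hypothetical form of the GRIN metric).
    """
    params = list(model.parameters())
    forget_grads = torch.autograd.grad(forget_loss, params)
    retain_grads = torch.autograd.grad(retain_loss, params)
    return [fg.abs() / (rg.abs() + eps) for fg, rg in zip(forget_grads, retain_grads)]

def inject_noise(model, scores, top_frac=0.01, sigma=0.02):
    """Add Gaussian noise only to the top-scoring fraction of parameters,
    leaving the rest untouched (selective injection prior to fine-tuning)."""
    flat = torch.cat([s.flatten() for s in scores])
    k = max(1, int(top_frac * flat.numel()))
    threshold = torch.topk(flat, k).values.min()  # score cutoff for the top-k entries
    with torch.no_grad():
        for p, s in zip(model.parameters(), scores):
            mask = (s >= threshold).float()       # 1 where the parameter is "selected"
            p.add_(sigma * mask * torch.randn_like(p))
```

In use, one would compute a loss on a forget batch and a retain batch, score, inject, and then fine-tune as usual; for example:

```python
model = torch.nn.Linear(4, 2)
loss_fn = torch.nn.MSELoss()
forget_loss = loss_fn(model(torch.randn(8, 4)), torch.randn(8, 2))
retain_loss = loss_fn(model(torch.randn(8, 4)), torch.randn(8, 2))
scores = gradient_ratio_scores(model, forget_loss, retain_loss)
inject_noise(model, scores, top_frac=0.1, sigma=0.05)
```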