OBLIVIATE: Robust and Practical Machine Unlearning for Large Language Models
Xiaoyu Xu, Minxin Du, Qingqing Ye, Haibo Hu
arXiv.org Artificial Intelligence
Large language models (LLMs) trained on extensive corpora risk memorizing sensitive, copyrighted, or toxic content. To address this, we propose OBLIVIATE, a robust unlearning framework that removes targeted data while preserving model utility. The framework follows a structured process: extracting target tokens, building retain sets, and fine-tuning with a tailored loss that combines three components (masking, distillation, and world-fact terms). Low-rank adapters (LoRA) ensure efficiency without compromising unlearning quality. We conduct experiments on multiple datasets, including the Harry Potter series, WMDP, and TOFU, using a comprehensive suite of metrics: forget quality (via a new document-level memorization score), model utility, and fluency. Results demonstrate OBLIVIATE's effectiveness in resisting membership inference attacks, minimizing the impact on retained data, and maintaining robustness across diverse scenarios.
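To make the three-part objective concrete, here is a minimal sketch of how such a combined unlearning loss could be wired together. The component formulations, function names, and weights below are illustrative assumptions for exposition; they are not the paper's published equations.

```python
import math

def masking_loss(probs_on_targets):
    # Hypothetical masking term: penalize probability mass the model
    # still places on extracted target tokens, pushing it toward zero.
    return sum(-math.log(1.0 - p) for p in probs_on_targets) / len(probs_on_targets)

def distillation_loss(student_probs, teacher_probs):
    # Hypothetical distillation term: KL(teacher || student) on the retain
    # set keeps general behavior close to the original (teacher) model.
    return sum(t * math.log(t / s) for t, s in zip(teacher_probs, student_probs))

def world_fact_loss(probs_on_facts):
    # Hypothetical world-fact term: negative log-likelihood on curated
    # factual answers unrelated to the forget set, preserving knowledge.
    return sum(-math.log(p) for p in probs_on_facts) / len(probs_on_facts)

def combined_loss(target_probs, student_probs, teacher_probs, fact_probs,
                  alpha=1.0, beta=1.0, gamma=1.0):
    # Weighted sum of the three components; alpha/beta/gamma are
    # assumed tuning knobs, not values from the paper.
    return (alpha * masking_loss(target_probs)
            + beta * distillation_loss(student_probs, teacher_probs)
            + gamma * world_fact_loss(fact_probs))
```

Under this sketch, a model that assigns low probability to target tokens while answering world-fact prompts confidently yields a lower loss than one that still memorizes the targets; in practice each term would be computed from model logits during LoRA fine-tuning.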
Sep-10-2025
- Country:
- Asia > China
- Hong Kong (0.04)
- North America > United States
- Virginia (0.04)
- Genre:
- Research Report > New Finding (0.66)
- Industry:
- Information Technology > Security & Privacy (1.00)
- Law (1.00)