Direct Token Optimization: A Self-contained Approach to Large Language Model Unlearning

Hongkyu Lee, Ruixuan Liu, Li Xiong

arXiv.org Artificial Intelligence 

Machine unlearning is an emerging technique that removes the influence of a subset of training data (the forget set) from a model without full retraining, with applications including privacy protection, content moderation, and model correction. The key challenge lies in ensuring that the model completely forgets the knowledge of the forget set without compromising its overall utility. Existing unlearning methods for large language models (LLMs) often rely on auxiliary language models, retain datasets, or even commercial AI services to achieve effective unlearning while maintaining model utility. However, dependence on these external resources is often impractical and can introduce additional privacy risks. In this work, we propose direct token optimization (DTO), a novel self-contained unlearning approach for LLMs that directly optimizes token-level objectives and eliminates the need for external resources. Given a sequence to unlearn, we identify two categories of tokens: target tokens, which capture the critical knowledge to be unlearned, and the remaining non-target tokens, which are crucial for maintaining model utility. The former are used to optimize the unlearning objective, while the latter serve to preserve the model's performance. Experimental results show that DTO achieves up to a 16.8 improvement in forget quality on several benchmark datasets over the latest baselines while maintaining a comparable level of model utility.

Machine unlearning aims to remove the effect of a subset of training data (referred to as the forget set) from a trained model (Cao & Yang, 2015). The concept was introduced in response to data protection regulations such as the General Data Protection Regulation (GDPR) (Mantelero, 2013), which established the 'right to be forgotten'. A successfully unlearned model should fully eliminate the influence of the forget set (unlearning efficacy) and preserve overall performance (model utility).
Additionally, the unlearning algorithm should be more efficient than retraining (efficiency).
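The token-level split described above can be sketched as a combined objective: a gradient-ascent (negated) loss on target tokens to drive forgetting, plus the standard loss on non-target tokens to preserve utility. The following NumPy sketch is illustrative only; the function name, the `alpha` weighting knob, and the way the target mask is supplied are assumptions, not the paper's actual implementation.

```python
import numpy as np

def dto_style_loss(logits, labels, target_mask, alpha=1.0):
    """Illustrative sketch of a DTO-style token-level objective.

    logits:      (batch, seq, vocab) unnormalized model outputs
    labels:      (batch, seq) integer token ids
    target_mask: (batch, seq) bool; True marks target tokens to forget
    alpha:       assumed knob balancing forgetting vs. retention
    """
    # Numerically stable log-softmax over the vocabulary axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

    # Per-token negative log-likelihood of the reference labels.
    nll = -np.take_along_axis(logp, labels[..., None], axis=-1).squeeze(-1)

    # Maximize loss on target tokens (forget), minimize it elsewhere (retain).
    forget_term = -nll[target_mask].mean()
    retain_term = nll[~target_mask].mean()
    return alpha * forget_term + retain_term
```

In a real training loop this quantity would be computed inside an autodiff framework so the negated term pushes probability mass away from target tokens while the retain term anchors the rest of the sequence.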