Deep Contrastive Unlearning for Language Models

Estrid He, Tabinda Sarwar, Ibrahim Khalil, Xun Yi, and Ke Wang

arXiv.org Artificial Intelligence 

Abstract -- The past few years have witnessed the great success of large language models, which demonstrate powerful capabilities in comprehending textual data and generating human-like language. These models, however, are typically trained on vast amounts of user-contributed data that may contain personal or sensitive information. Thus, to safeguard individuals' "right to be forgotten", there has been increasing interest in machine unlearning - the process of removing the information carried by particular training samples from a model without deteriorating its predictive quality. This is a challenging task due to the black-box nature of language models. Most existing studies focus on mitigating the impact of the forgotten samples on a model's outputs and do not explicitly consider the geometric distribution of samples in the latent space of the model. To address this issue, we propose a machine unlearning framework named Deep Contrastive Unlearning for fine-Tuning (DeepCUT) language models. Our proposed model achieves machine unlearning by directly optimizing the latent space of a model. Comprehensive experiments on real-world datasets demonstrate the effectiveness and efficiency of DeepCUT, with consistent and significant improvements over baseline methods.

INTRODUCTION

In today's digital era, the availability of user-contributed data has increased exponentially. This rich and diverse data has been the engine of significant advancements in the development of natural language processing (NLP) models. In the past few years, the introduction of the Transformer architecture [1] has revolutionized NLP, enabling language models such as BERT [2] and RoBERTa [3].
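To make the idea of "unlearning by directly optimizing the latent space" concrete, the following is a minimal, illustrative sketch of a generic contrastive unlearning objective written in PyTorch. It is not the authors' exact DeepCUT formulation; the function name, tensor shapes, and the temperature parameter tau are assumptions for illustration only.

```python
# Illustrative sketch only: a generic contrastive unlearning objective on
# latent representations. This is NOT the exact DeepCUT loss; all names
# (contrastive_unlearning_loss, z_forget, z_retain, tau) are hypothetical.
import torch
import torch.nn.functional as F

def contrastive_unlearning_loss(z_forget, z_retain, y_forget, y_retain, tau=0.1):
    """Push forget-sample embeddings away from same-class retained samples
    and toward differently labelled ones, operating directly in latent space.

    z_forget: (Nf, d) embeddings of samples to be forgotten
    z_retain: (Nr, d) embeddings of retained samples
    y_forget, y_retain: integer class labels of the two batches
    """
    z_f = F.normalize(z_forget, dim=-1)
    z_r = F.normalize(z_retain, dim=-1)
    sim = z_f @ z_r.T / tau  # temperature-scaled cosine similarities, (Nf, Nr)

    # Mask of retained samples that share the label of each forget sample.
    same_class = (y_forget.unsqueeze(1) == y_retain.unsqueeze(0)).float()

    # Treat differently labelled retained samples as the "positives", so each
    # forget sample is attracted to other classes and repelled from its own.
    exp_sim = sim.exp()
    pos = (exp_sim * (1.0 - same_class)).sum(dim=1)
    denom = exp_sim.sum(dim=1)
    return -((pos / denom).clamp_min(1e-12)).log().mean()
```

In practice, an objective like this would typically be combined with the standard task loss on the retained data, so that the forget samples are displaced in the latent space while the model's predictive quality on the remaining data is preserved, which is consistent with the goal stated in the abstract.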