
Collaborating Authors

 Fan, Chongyu


Towards LLM Unlearning Resilient to Relearning Attacks: A Sharpness-Aware Minimization Perspective and Beyond

arXiv.org Artificial Intelligence

With the rapid advancement of large language models (LLMs), concerns about their privacy, safety, and trustworthiness have become increasingly prominent (Liu et al., 2024d; Barez et al., 2025). A direct remedy is to retrain these models with the undesired data removed, but doing so is often infeasible due to the significant computational and time costs involved. To address this challenge, LLM unlearning (Yao et al., 2024; Eldan & Russinovich, 2023; Maini et al., 2024; Liu et al., 2024b) has emerged as a post-pretraining strategy, which aims to mitigate the impact of undesirable data (e.g., sensitive, biased, unsafe, or illegal information) and suppress the associated model capabilities, thereby preventing LLMs from generating harmful content while preserving the model's utility post-unlearning. Despite the increasing importance of LLM unlearning, several recent studies (Łucki et al., 2024; Zhang et al., 2024e; Lynch et al., 2024; Hu et al., 2024; Deeb & Roger, 2024) have identified a critical issue: LLM unlearning often lacks robustness. Specifically, 'already-unlearned' knowledge can be quickly recovered post-unlearning through so-called relearning attacks (Lynch et al., 2024; Hu et al., 2024). These attacks can effectively reverse the unlearning process through lightweight fine-tuning of the unlearned model on only a small number of samples from the forget dataset.
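
A relearning attack of the kind described above amounts to briefly fine-tuning the unlearned model on a handful of forget-set samples and checking whether the erased behavior resurfaces. The sketch below illustrates this probe with PyTorch and Hugging Face Transformers; the checkpoint path, the forget_texts placeholder, and all hyperparameters are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch of a relearning attack: lightly fine-tune an unlearned model on a
# few forget-set samples and test whether the "already-unlearned" knowledge returns.
# Assumptions: the checkpoint path, forget_texts, and hyperparameters are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/unlearned-model"  # hypothetical unlearned checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_path)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_path)
model.train()

forget_texts = ["<a few samples drawn from the forget dataset>"]
batch = tokenizer(forget_texts, return_tensors="pt", padding=True, truncation=True)
labels = batch["input_ids"].clone()
labels[batch["attention_mask"] == 0] = -100  # ignore padding in the loss

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# A small number of lightweight fine-tuning steps is often enough to reverse unlearning.
for step in range(20):
    loss = model(**batch, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```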


Simplicity Prevails: Rethinking Negative Preference Optimization for LLM Unlearning

arXiv.org Artificial Intelligence

In this work, we address the problem of large language model (LLM) unlearning, aiming to remove unwanted data influences and the associated model capabilities (e.g., copyrighted data or harmful content generation) while preserving essential model utility, without retraining from scratch. Despite the growing need for LLM unlearning, a principled optimization framework remains lacking. To this end, we revisit the state-of-the-art approach, negative preference optimization (NPO), and identify the issue of reference model bias, which can undermine NPO's effectiveness, particularly when unlearning forget data of varying difficulty. Given this, we propose a simple yet effective unlearning optimization framework, called SimNPO, showing that 'simplicity' in removing the reliance on a reference model (through the lens of simple preference optimization) benefits unlearning. We also provide deeper insights into SimNPO's advantages, supported by an analysis using mixtures of Markov chains. Furthermore, we present extensive experiments validating SimNPO's superiority over existing unlearning baselines on benchmarks such as TOFU and MUSE, as well as its robustness against relearning attacks. Code is available at https://github.com/OPTML-Group/Unlearn-Simple.
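
To make the role of the reference model concrete, the sketch below contrasts an NPO-style forget loss, which needs log-probabilities from both the current model and a frozen reference model, with a SimNPO-style loss that drops the reference model in favor of length normalization and a reward margin. The tensor inputs and the beta and gamma values are illustrative assumptions; the linked repository contains the authors' actual implementation.

```python
# Sketch: reference-based (NPO-style) vs. reference-free (SimNPO-style) forget losses.
# logp_theta / logp_ref: summed token log-probs of forget responses under the current
# and frozen reference models; seq_len: response lengths |y|. beta and gamma are
# illustrative hyperparameters, not the paper's tuned values.
import torch
import torch.nn.functional as F

def npo_style_loss(logp_theta, logp_ref, beta=0.1):
    # Penalizes the forget response relative to a frozen reference model.
    return -(2.0 / beta) * F.logsigmoid(-beta * (logp_theta - logp_ref)).mean()

def simnpo_style_loss(logp_theta, seq_len, beta=2.5, gamma=0.0):
    # Reference-free: length-normalized log-probability with a reward margin gamma.
    return -(2.0 / beta) * F.logsigmoid(-(beta / seq_len) * logp_theta - gamma).mean()

# Dummy batch of two forget responses, for illustration only.
logp_theta = torch.tensor([-40.0, -55.0])
logp_ref = torch.tensor([-38.0, -60.0])
seq_len = torch.tensor([20.0, 30.0])
print(npo_style_loss(logp_theta, logp_ref), simnpo_style_loss(logp_theta, seq_len))
```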


Challenging Forgets: Unveiling the Worst-Case Forget Sets in Machine Unlearning

arXiv.org Artificial Intelligence

The trustworthy machine learning (ML) community is increasingly recognizing the crucial need for models capable of selectively 'unlearning' data points after training. This leads to the problem of machine unlearning (MU), which aims to eliminate the influence of chosen data points on model performance while still maintaining the model's utility post-unlearning. Despite various MU methods for erasing data influence, evaluations have largely focused on random data forgetting, ignoring the vital question of which subset should be chosen to truly gauge the authenticity of unlearning performance. To tackle this issue, we introduce a new evaluative angle for MU from an adversarial viewpoint. We propose identifying the data subset that presents the most significant challenge for influence erasure, i.e., pinpointing the worst-case forget set. Utilizing a bi-level optimization principle, we amplify unlearning challenges at the upper optimization level to emulate worst-case scenarios, while simultaneously engaging in standard training and unlearning at the lower level, achieving a balance between data influence erasure and model utility. Our proposal offers a worst-case evaluation of MU's resilience and effectiveness. Through extensive experiments across different datasets (including CIFAR-10, CIFAR-100, CelebA, Tiny ImageNet, and ImageNet) and models (including both image classifiers and generative models), we expose critical pros and cons of existing (approximate) unlearning strategies. Our results illuminate the complex challenges of MU in practice, guiding the future development of more accurate and robust unlearning algorithms. The code is available at https://github.com/OPTML-Group/Unlearn-WorstCase.
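
The bi-level search for a worst-case forget set can be pictured as alternating between an upper-level update of relaxed selection scores that maximizes how hard the selected points are to erase, and a lower-level training-plus-unlearning step on the current selection. Everything below, including the sigmoid relaxation and the callables unlearn_step and forget_difficulty, is a hypothetical outline of that principle rather than the released implementation.

```python
# Sketch of worst-case forget-set selection via a bi-level principle.
# Upper level: adjust soft selection scores w to maximize post-unlearning difficulty.
# Lower level: run a standard train/unlearn step on the currently selected points.
# unlearn_step and forget_difficulty are hypothetical callables supplied by the user.
import torch

def worst_case_forget_selection(model, candidates, unlearn_step, forget_difficulty,
                                rounds=10, k=100, lr_w=0.1):
    w = torch.zeros(len(candidates), requires_grad=True)  # soft membership scores
    opt_w = torch.optim.Adam([w], lr=lr_w)

    for _ in range(rounds):
        probs = torch.sigmoid(w)  # relaxed forget-set membership
        # Lower level: standard training + unlearning on the (softly) selected subset.
        unlearn_step(model, candidates, probs.detach())
        # Upper level: gradient ascent on how much influence survives unlearning.
        difficulty = forget_difficulty(model, candidates, probs)
        (-difficulty).backward()
        opt_w.step()
        opt_w.zero_grad()

    return torch.topk(w.detach(), k).indices  # indices of the hardest-to-forget points
```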


SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation

arXiv.org Artificial Intelligence

With evolving data regulations, machine unlearning (MU) has become an important tool for fostering trust and safety in today's AI models. However, existing MU methods, which focus on data and/or weight perspectives, often grapple with limitations in unlearning accuracy, stability, and cross-domain applicability. To address these challenges, we introduce the concept of 'weight saliency' in MU, drawing parallels with input saliency in model explanation. This innovation directs MU's attention toward specific model weights rather than the entire model, improving effectiveness and efficiency. The resulting method, which we call saliency unlearning (SalUn), narrows the performance gap with 'exact' unlearning (model retraining from scratch after removing the forgetting dataset). To the best of our knowledge, SalUn is the first principled MU approach adaptable enough to effectively erase the influence of forgetting data, classes, or concepts in both image classification and generation. For example, SalUn yields a stability advantage in high-variance random data forgetting, e.g., with a 0.2% gap relative to exact unlearning on the CIFAR-10 dataset. Moreover, in preventing conditional diffusion models from generating harmful images, SalUn achieves nearly 100% unlearning accuracy, outperforming current state-of-the-art baselines such as Erased Stable Diffusion and Forget-Me-Not.
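
Weight saliency can be pictured as a two-step recipe: score each parameter by the magnitude of the forgetting-loss gradient, then restrict unlearning updates to parameters whose score clears a threshold. The sketch below follows that recipe in PyTorch; the threshold, the losses, and the plain SGD-style masked update are illustrative assumptions, not the released SalUn code.

```python
# Sketch of gradient-based weight saliency for unlearning.
# Step 1: build a binary saliency mask from forgetting-loss gradient magnitudes.
# Step 2: apply unlearning updates only to the salient weights, leaving the rest
# untouched to help preserve model utility. Threshold and update rule are illustrative.
import torch

def weight_saliency_mask(model, forget_loss, threshold=1e-3):
    model.zero_grad()
    forget_loss.backward()
    # Salient weights: forgetting-loss gradient magnitude at or above the threshold.
    return {name: (p.grad.abs() >= threshold).float()
            for name, p in model.named_parameters() if p.grad is not None}

def masked_unlearning_step(model, unlearn_loss, mask, lr=1e-4):
    model.zero_grad()
    unlearn_loss.backward()
    with torch.no_grad():
        for name, p in model.named_parameters():
            if p.grad is not None and name in mask:
                p -= lr * mask[name] * p.grad  # update salient weights only
```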