Do Unlearning Methods Remove Information from Language Model Weights?

Aghyad Deeb, Fabien Roger

arXiv.org Artificial Intelligence 

Large Language Models' knowledge of how to perform cyber-security attacks, create bioweapons, and manipulate humans poses risks of misuse. Previous work has proposed methods to unlearn this knowledge. Historically, it has been unclear whether unlearning techniques remove information from the model weights or merely make it harder to access. To disentangle these two objectives, we propose an adversarial evaluation method that tests for the removal of information from model weights: we give an attacker access to some facts that were supposed to be removed, and, using those, the attacker tries to recover other facts from the same distribution that cannot be guessed from the accessible facts. We show that fine-tuning on the accessible facts recovers 88% of the pre-unlearning accuracy of models treated with current unlearning methods, revealing the limitations of these methods in removing information from the model weights.

During pretraining, Large Language Models (LLMs) acquire many capabilities, both intended and unintended (Wei et al., 2022). This has raised concerns that LLMs may acquire dangerous capabilities that malicious actors can exploit, such as assisting in cyber-attacks or creating bioweapons (Fang et al., 2024). Acknowledging these threats, the Executive Order on Artificial Intelligence (White House, 2023) has emphasized the importance of responsible development of AI models. To address these concerns, LLMs are typically trained to refuse to engage in dangerous activities. However, refusal is vulnerable to jailbreak techniques (Wei et al., 2023; Zou et al., 2023; Liu et al., 2024b) and other attacks.

Figure 1: Our approach to evaluate unlearning: we try to recover potentially hidden facts by retraining on facts that are independent of the facts used for evaluation but come from the same distribution (left). Using this procedure, we find that we can recover a large fraction of performance when using state-of-the-art unlearning methods like RMU (Li et al., 2024b) (right).
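To make the evaluation procedure concrete, the following is a minimal sketch of the fine-tuning attack described above: fine-tune the unlearned model on attacker-accessible facts, then check whether held-out facts from the same distribution resurface. The model path, example data, hyperparameters, and exact-match scoring are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch: retraining attack on an unlearned model (assumptions noted below).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
model_name = "path/to/unlearned-model"  # hypothetical checkpoint after unlearning
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).to(device)

# Facts the attacker is given (disjoint from, but drawn from the same
# distribution as, the evaluation facts). Contents are placeholders.
accessible_facts = ["Fact A ...", "Fact B ..."]
held_out_questions = [("Question ...?", "answer")]

# 1) Fine-tune the unlearned model on the accessible facts only.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()
for epoch in range(3):
    for fact in accessible_facts:
        batch = tok(fact, return_tensors="pt").to(device)
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# 2) Measure how much "unlearned" knowledge resurfaces on held-out facts that
#    cannot be guessed from the accessible ones (exact-match on greedy
#    completions here, as a stand-in for the paper's accuracy metric).
model.eval()
correct = 0
for question, answer in held_out_questions:
    inputs = tok(question, return_tensors="pt").to(device)
    out = model.generate(**inputs, max_new_tokens=16, do_sample=False)
    completion = tok.decode(out[0, inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True)
    correct += int(answer.lower() in completion.lower())
print(f"Recovered accuracy: {correct / len(held_out_questions):.2%}")
```

If the attack recovers much of the pre-unlearning accuracy on the held-out facts, the information was still encoded in the weights rather than removed.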