Meta-Unlearning on Diffusion Models: Preventing Relearning Unlearned Concepts

Hongcheng Gao, Tianyu Pang, Chao Du, Taihang Hu, Zhijie Deng, Min Lin

arXiv.org Artificial Intelligence 

Even when diffusion models (DMs) are properly unlearned before release, it has been observed that malicious finetuning can compromise this process, causing DMs to relearn the unlearned concepts. This occurs partly because certain benign concepts (e.g., "skin") retained in DMs are related to the unlearned ones (e.g., "nudity"), facilitating their relearning via finetuning. To address this, we propose meta-unlearning on DMs. Intuitively, a meta-unlearned DM should behave like an unlearned DM when used as is; moreover, if the meta-unlearned DM undergoes malicious finetuning on unlearned concepts, the related benign concepts retained within it will be triggered to self-destruct, hindering the relearning of the unlearned concepts. Our meta-unlearning framework is compatible with most existing unlearning methods, requiring only the addition of an easy-to-implement meta objective. We validate our approach through empirical experiments on meta-unlearning concepts from Stable Diffusion models (SD-v1-4 and SDXL), supported by extensive ablation studies.

Diffusion models (DMs) have achieved remarkable success in generative tasks (Ho et al., 2020; Song et al., 2021), leading to the emergence of large-scale models like Stable Diffusion (SD) for text-to-image generation (Rombach et al., 2022). However, the risk that such models can be misused to generate harmful or copyrighted content has sparked interest in machine unlearning algorithms for DMs (Gandikota et al., 2023; 2024; Kumari et al., 2023; Kim et al., 2023), which modify pretrained models to forget specific inappropriate data (the forget set) while retaining performance on the remaining benign data (the retain set).
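To make the structure of such a meta objective concrete, below is a minimal, self-contained sketch of the idea described above: unlearn as usual, differentiate through one simulated malicious finetuning step, and penalize the simulated model for still performing well on related benign data. This is not the paper's implementation; it uses a toy regression model with MSE losses as stand-ins for the diffusion denoising loss, and the gradient-ascent unlearning term, inner learning rate, and loss weighting are all illustrative assumptions.

```python
import torch
import torch.nn as nn
from torch.func import functional_call

torch.manual_seed(0)

# Toy stand-in for a diffusion model; MSE plays the role of the denoising loss.
model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 8))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = nn.MSELoss()

def loss_on(params, x, y):
    # Stateless forward pass of `model` under an arbitrary parameter dict.
    return mse(functional_call(model, params, (x,)), y)

# Hypothetical data: (x_f, y_f) ~ forget set, (x_r, y_r) ~ retain set,
# (x_b, y_b) ~ benign data related to the forgotten concept (e.g., "skin").
x_f, y_f = torch.randn(16, 8), torch.randn(16, 8)
x_r, y_r = torch.randn(16, 8), torch.randn(16, 8)
x_b, y_b = torch.randn(16, 8), torch.randn(16, 8)

inner_lr, meta_weight = 1e-2, 1.0  # assumed hyperparameters

for step in range(200):
    params = dict(model.named_parameters())

    # (1) Base unlearning objective: gradient ascent on the forget set
    # (one simple existing unlearning choice) plus retain-set preservation.
    unlearn = -loss_on(params, x_f, y_f) + loss_on(params, x_r, y_r)

    # (2) Simulate one step of malicious finetuning that tries to relearn
    # the forgotten data; create_graph=True keeps the step differentiable.
    relearn = loss_on(params, x_f, y_f)
    grads = torch.autograd.grad(relearn, tuple(params.values()), create_graph=True)
    adapted = {n: p - inner_lr * g for (n, p), g in zip(params.items(), grads)}

    # (3) Meta ("self-destruct") term: after the simulated finetune, the
    # model should perform poorly on the related benign data, so we
    # maximize that loss by minimizing its negative.
    meta = -loss_on(adapted, x_b, y_b)

    loss = unlearn + meta_weight * meta
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In a real DM setting, `loss_on` would be the denoising objective on noised latents conditioned on concept prompts, and the negated loss terms would need bounding or scheduling to keep optimization stable; the sketch only shows the MAML-style pattern of backpropagating through a simulated finetuning step, which is what makes the benign concepts self-destruct under later malicious finetuning.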