Single Image Unlearning: Efficient Machine Unlearning in Multimodal Large Language Models
–Neural Information Processing Systems
Machine unlearning (MU) empowers individuals with the 'right to be forgotten' by removing their private or sensitive information encoded in machine learning models. However, it remains uncertain whether MU can be effectively applied to Multimodal Large Language Models (MLLMs), particularly in scenarios that require forgetting leaked visual data of concepts. To address this challenge, we propose an efficient method, Single Image Unlearning (SIU), which unlearns the visual recognition of a concept by fine-tuning on a single associated image for a few steps. SIU consists of two key aspects: (i) constructing multifaceted fine-tuning data, where we introduce four targets and, based on them, build fine-tuning data for the concepts to be forgotten; and (ii) a joint training loss, which fine-tunes MLLMs with a novel Dual Masked KL-divergence Loss combined with a Cross-Entropy loss so that the visual recognition of concepts is forgotten while the utility of the MLLMs is preserved. Alongside our method, we establish MMUBench, a new benchmark for MU in MLLMs, and introduce a collection of metrics for its evaluation.
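The abstract does not spell out how the Dual Masked KL-divergence Loss is computed, so the sketch below is only a rough illustration of the general idea it describes: a cross-entropy term on the constructed forgetting targets plus a masked, token-level KL term against a frozen copy of the original MLLM to preserve its behaviour elsewhere. The function name `joint_unlearning_loss`, the `kl_mask` input, and the `kl_weight` coefficient are hypothetical, not from the paper, and the actual dual masking scheme is defined by the authors.

```python
import torch.nn.functional as F


def joint_unlearning_loss(student_logits, teacher_logits, labels, kl_mask, kl_weight=1.0):
    """Rough sketch of a joint unlearning objective (names are hypothetical).

    student_logits / teacher_logits: (batch, seq_len, vocab) logits from the
        model being fine-tuned and from a frozen copy of the original MLLM.
    labels: (batch, seq_len) token ids of the constructed forgetting targets;
        positions set to -100 are ignored by the cross-entropy term.
    kl_mask: (batch, seq_len) boolean mask choosing where the KL term applies
        (standing in for the paper's dual masking scheme, which the abstract
        does not specify).
    """
    vocab_size = student_logits.size(-1)

    # Cross-entropy on the multifaceted fine-tuning data drives forgetting.
    ce = F.cross_entropy(
        student_logits.reshape(-1, vocab_size),
        labels.reshape(-1),
        ignore_index=-100,
    )

    # Token-level KL(teacher || student) on the masked positions keeps the
    # fine-tuned model close to the original one, preserving general utility.
    log_p_student = F.log_softmax(student_logits, dim=-1)
    p_teacher = F.softmax(teacher_logits.detach(), dim=-1)
    kl_per_token = (
        p_teacher * (p_teacher.clamp_min(1e-8).log() - log_p_student)
    ).sum(dim=-1)
    mask = kl_mask.float()
    kl = (kl_per_token * mask).sum() / mask.sum().clamp_min(1.0)

    return ce + kl_weight * kl
```

In this reading, the teacher logits would come from the pre-unlearning checkpoint, and only a few gradient steps would be taken on the fine-tuning data built from the single associated image, as the abstract describes.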
May-29-2025, 05:59:16 GMT
- Country:
  - Europe > United Kingdom
    - Scotland (0.14)
  - North America > United States (1.00)
- Genre:
  - Research Report > Experimental Study (1.00)
- Industry:
  - Government > Regional Government
  - Information Technology > Security & Privacy (1.00)
  - Law (1.00)
  - Leisure & Entertainment (0.93)
  - Media (0.93)
- Technology:
  - Information Technology > Artificial Intelligence
    - Machine Learning > Neural Networks (1.00)
    - Natural Language > Large Language Model (1.00)
    - Vision (1.00)