Model Sparsity Can Simplify Machine Unlearning
–Neural Information Processing Systems
In response to recent data regulation requirements, machine unlearning (MU) has emerged as a critical process to remove the influence of specific examples from a given model. Although exact unlearning can be achieved through complete model retraining using the remaining dataset, the associated computational costs have driven the development of efficient, approximate unlearning techniques. Moving beyond data-centric MU approaches, our study introduces a novel model-based perspective: model sparsification via weight pruning, which is capable of reducing the gap between exact unlearning and approximate unlearning. We show in both theory and practice that model sparsity can boost the multi-criteria unlearning performance of an approximate unlearner, closing the approximation gap, while continuing to be efficient. This leads to a new MU paradigm, termed prune first, then unlearn, which infuses a sparse model prior into the unlearning process.
May-25-2025, 07:13:05 GMT
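The "prune first, then unlearn" paradigm described above can be sketched in a few lines. The toy below is an illustrative assumption, not the authors' implementation: a linear model is magnitude-pruned to a fixed sparsity, and a simple approximate unlearner then fine-tunes on the retain set only, re-applying the pruning mask each step so the sparse prior is preserved. The helper names (`magnitude_prune`, `finetune_unlearn`) and the toy data are hypothetical.

```python
import numpy as np

def magnitude_prune(w, sparsity):
    """Zero out the smallest-magnitude fraction of weights; return pruned weights and mask."""
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy(), np.ones_like(w, dtype=bool)
    thresh = np.sort(np.abs(w).ravel())[k - 1]
    mask = np.abs(w) > thresh
    return w * mask, mask

def finetune_unlearn(w, mask, X_retain, y_retain, lr=0.1, steps=100):
    """Approximate unlearning: fine-tune on the retain set only.

    The pruning mask is re-applied after every gradient step, so the
    model stays sparse throughout unlearning."""
    for _ in range(steps):
        grad = X_retain.T @ (X_retain @ w - y_retain) / len(y_retain)
        w = (w - lr * grad) * mask
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
w_true = rng.normal(size=10)
y = X @ w_true

# Pretend rows 0..49 are the "forget" set; the rest form the retain set.
X_r, y_r = X[50:], y[50:]

w0 = rng.normal(size=10)                        # stand-in for a trained dense model
w_sparse, mask = magnitude_prune(w0, sparsity=0.5)
w_unlearned = finetune_unlearn(w_sparse, mask, X_r, y_r)

print(int(mask.sum()))                          # 5 of 10 weights survive pruning
```

In this sketch the forget set simply never enters the fine-tuning loss; the paper's point is that the sparse prior imposed by `mask` narrows the gap between this cheap approximate unlearner and full retraining on the retain set.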