When unlearning is free: leveraging low influence points to reduce computational costs
Anat Kleiman, Robert Fisher, Ben Deaner, Udi Wieder
arXiv.org Artificial Intelligence
As concerns around data privacy in machine learning grow, the ability to unlearn, i.e., remove, specific data points from trained models becomes increasingly important. While state-of-the-art unlearning methods have emerged in response, they typically treat all points in the forget set equally. In this work, we challenge this approach by asking whether points that have a negligible impact on the model's learning need to be removed at all. Through a comparative analysis of influence functions across language and vision tasks, we identify subsets of training data with negligible impact on model outputs. Leveraging this insight, we propose an efficient unlearning framework that reduces the size of the forget set before unlearning, leading to significant computational savings (up to approximately 50 percent) on real-world empirical examples.
Dec-8-2025
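The pipeline the abstract describes, i.e., score each forget-set point's influence and unlearn only the points whose influence is non-negligible, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the threshold value, and the assumption that influence scores are already computed (e.g., via influence functions) are all hypothetical.

```python
import numpy as np

def prune_forget_set(influence_scores, forget_indices, threshold):
    """Keep only forget-set points whose |influence| exceeds the threshold.

    Points with negligible influence on model outputs are dropped, so the
    downstream unlearning procedure runs on a smaller forget set.
    Assumes influence_scores[i] is the (precomputed) influence of the
    training point at forget_indices[i].
    """
    scores = np.asarray(influence_scores, dtype=float)
    keep_mask = np.abs(scores) > threshold
    return [idx for idx, keep in zip(forget_indices, keep_mask) if keep]

# Toy example: six forget-set points, two with negligible influence.
forget_indices = [3, 7, 12, 20, 31, 45]
influence = [0.92, 0.001, -0.40, 0.0005, 0.15, -0.75]
pruned = prune_forget_set(influence, forget_indices, threshold=0.01)
print(pruned)  # the two near-zero-influence points (7 and 20) are dropped
```

The computational saving claimed in the abstract comes directly from this step: any unlearning method whose cost scales with the forget-set size runs on the pruned set instead of the full one.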