Inducing Uncertainty on Open-Weight Models for Test-Time Privacy in Image Recognition
Ashiq, Muhammad H., Triantafillou, Peter, Tseng, Hung Yun, Chrysos, Grigoris G.
arXiv.org Artificial Intelligence
A key concern for AI safety remains understudied in the machine learning (ML) literature: how can we ensure users of ML models do not leverage predictions on incorrect personal data to harm others? This is particularly pertinent given the rise of open-weight models, where simply masking model outputs does not suffice to prevent adversaries from recovering harmful predictions. To address this threat, which we call *test-time privacy*, we induce maximal uncertainty on protected instances while preserving accuracy on all other instances. Our proposed algorithm uses a Pareto optimal objective that explicitly balances test-time privacy against utility. We also provide a certifiable approximation algorithm which achieves $(\varepsilon, \delta)$ guarantees without convexity assumptions. We then prove a tight bound that characterizes the privacy-utility tradeoff that our algorithms incur. Empirically, our method obtains at least $3\times$ stronger uncertainty than pretraining, with marginal drops in accuracy, on various image recognition benchmarks. Altogether, this framework provides a tool to guarantee additional protection to end users.
Oct-1-2025
- Country:
- North America > United States (0.28)
- Genre:
- Research Report > New Finding (0.46)
- Industry:
- Banking & Finance > Insurance (0.93)
- Health & Medicine > Therapeutic Area
- Dermatology (0.46)
- Information Technology > Security & Privacy (1.00)