Improving Predictor Reliability with Selective Recalibration
Thomas P. Zollo, Zhun Deng, Jake C. Snell, Toniann Pitassi, Richard Zemel
– arXiv.org Artificial Intelligence
A reliable deep learning system should be able to accurately express its confidence with respect to its predictions, a quality known as calibration. One of the most effective ways to produce reliable confidence estimates with a pre-trained model is by applying a post-hoc recalibration method. Popular recalibration methods like temperature scaling are typically fit on a small amount of data and work in the model's output space, as opposed to the more expressive feature embedding space, and thus usually have only one or a handful of parameters. However, the target distribution to which they are applied is often complex and difficult to fit well with such a function. To this end, we propose *selective recalibration*, where a selection model learns to reject some user-chosen proportion of the data in order to allow the recalibrator to focus on regions of the input space that can be well-captured by such a model. We provide theoretical analysis to motivate our algorithm, and test our method through comprehensive experiments on difficult medical imaging and zero-shot classification tasks. Our results show that selective recalibration consistently leads to significantly lower calibration error than a wide range of selection and recalibration baselines.
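To make the setup concrete, below is a minimal, hypothetical Python sketch, not the authors' code: it fits one-parameter temperature scaling by minimizing validation negative log-likelihood, then shows a selective variant that rejects a user-chosen fraction of inputs before refitting. Predictive entropy is used here as a crude stand-in for the learned selection model described in the abstract; the function names, synthetic data, and 20% rejection rate are all illustrative assumptions.

```python
# Sketch: temperature scaling, plus a simple "selective" variant that
# rejects a fixed fraction of inputs before fitting the recalibrator.
# NOT the paper's implementation; the paper learns the selector jointly.
import numpy as np
from scipy.optimize import minimize_scalar


def softmax(logits, T=1.0):
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)


def nll(T, logits, labels):
    # Negative log-likelihood of the true labels at temperature T.
    probs = softmax(logits, T)
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()


def fit_temperature(logits, labels):
    # One-parameter recalibrator: pick T minimizing validation NLL.
    res = minimize_scalar(nll, bounds=(0.05, 10.0),
                          args=(logits, labels), method="bounded")
    return res.x


def expected_calibration_error(probs, labels, n_bins=15):
    # Standard binned ECE: |accuracy - mean confidence| per bin.
    conf = probs.max(axis=1)
    pred = probs.argmax(axis=1)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(
                (pred[mask] == labels[mask]).mean() - conf[mask].mean())
    return ece


# Synthetic overconfident classifier (assumption for illustration).
rng = np.random.default_rng(0)
n, k = 2000, 10
labels = rng.integers(0, k, size=n)
logits = rng.normal(size=(n, k))
logits[np.arange(n), labels] += 2.5  # informative model...
logits *= 3.0                        # ...with inflated confidence

# Plain temperature scaling on all validation data.
T = fit_temperature(logits, labels)
print("ECE, full data:", expected_calibration_error(softmax(logits, T), labels))

# Selective variant: reject the 20% highest-entropy inputs, then refit
# the recalibrator on the retained, easier-to-fit region only.
p = softmax(logits)
entropy = -(p * np.log(p + 1e-12)).sum(axis=1)
keep = entropy <= np.quantile(entropy, 0.8)
T_sel = fit_temperature(logits[keep], labels[keep])
print("ECE, selected subset:",
      expected_calibration_error(softmax(logits[keep], T_sel), labels[keep]))
```

On synthetic data like this, restricting the recalibrator to the retained subset will typically lower the calibration error measured on that subset, which is the effect selective recalibration targets; note that calibration is then only claimed on the accepted region, not on the rejected inputs.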
Oct-7-2024
- Country:
- Europe (0.14)
- North America > United States (0.14)
- Genre:
- Research Report > New Finding (0.68)
- Industry:
- Health & Medicine > Diagnostic Medicine (0.34)