Understanding the Under-Coverage Bias in Uncertainty Estimation

Yu Bai, Song Mei, Huan Wang, Caiming Xiong

arXiv.org Machine Learning 

This paper is concerned with the problem of uncertainty estimation in regression. Uncertainty estimation is an increasingly important task in modern machine learning applications: models should not only make high-accuracy predictions, but also have a sense of how much the true label may deviate from the prediction. This capability is crucial for deploying machine learning in the real world, in particular in risk-sensitive domains such as medical AI [15, 29], self-driving cars [47], and so on. A common approach to uncertainty estimation in regression is to learn a quantile function or a prediction interval of the true label conditioned on the input, which provides useful distributional information about the label. Such learned quantiles are typically evaluated by their coverage, i.e., the probability that they cover the true label on a new test example. For example, a learned 90% upper quantile function should be an actual upper bound on the true label at least 90% of the time. Algorithms for learning quantiles date back to classical quantile regression [35], which estimates the quantile function by solving an empirical risk minimization problem with a suitable loss function that depends on the desired quantile level α.
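
To make the setup concrete, below is a minimal sketch (not the paper's code) of linear quantile regression: it minimizes the pinball (check) loss at level α by plain subgradient descent and then reports empirical coverage on held-out data. The synthetic data, function names, and hyperparameters are illustrative assumptions.

```python
# Sketch of quantile regression via the pinball loss, plus an empirical
# coverage check. All names and the synthetic data are illustrative.
import numpy as np

def fit_linear_quantile(X, y, alpha, lr=0.05, epochs=2000):
    # Fit q(x) = x @ w + b by (sub)gradient descent on the pinball loss
    # rho_alpha(r) = max(alpha * r, (alpha - 1) * r), with r = y - q(x).
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        r = y - (X @ w + b)                      # residuals
        g = np.where(r > 0, -alpha, 1.0 - alpha) # d rho / d prediction
        w -= lr * (X.T @ g) / n
        b -= lr * g.mean()
    return w, b

def empirical_coverage(q_hat, y):
    # Fraction of test points whose label lies below the learned upper quantile.
    return np.mean(y <= q_hat)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d, alpha = 2000, 5, 0.9                   # target: 90% upper quantile
    X = rng.normal(size=(n, d))
    y = X @ rng.normal(size=d) + rng.normal(size=n)  # linear signal + noise
    X_tr, y_tr, X_te, y_te = X[:1000], y[:1000], X[1000:], y[1000:]

    w, b = fit_linear_quantile(X_tr, y_tr, alpha)
    cov = empirical_coverage(X_te @ w + b, y_te)
    print(f"target coverage {alpha:.2f}, empirical coverage {cov:.3f}")
```

Comparing the printed empirical coverage against the target level α is exactly the kind of evaluation the abstract describes.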
