Conformal Prediction with Upper and Lower Bound Models

Miao Li, Michael Klamkin, Mathieu Tanneau, Reza Zandehshahvar, Pascal Van Hentenryck

arXiv.org Machine Learning 

Quantifying the uncertainty of machine learning models is crucial for numerous applications, particularly in large-scale real-world settings where prediction sets, rather than point predictions, enable more flexible and informed decision making. Uncertainty quantification (UQ) methods are essential for characterizing the unpredictability that arises in real-world problems across science and engineering. Conformal prediction (CP), initially proposed by Vovk et al. [2005], is a popular distribution-free method for UQ, owing largely to its finite-sample coverage guarantees and computational efficiency. Most studies in CP focus on constructing prediction intervals around a fitted mean model. This work introduces a novel setting where the value of interest is estimated using only a pair of valid upper and lower bounds, instead of a mean model. While valid bounds provide perfect coverage by definition, they can be overly conservative. By slightly reducing the coverage level, these bounds can be tightened, yielding significantly smaller intervals with theoretical guarantees and greater utility for decision making.
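The tightening idea described in the abstract can be sketched with a simple split-conformal-style calibration. This is a minimal illustration, not the paper's actual method: it assumes synthetic data, a chosen miscoverage level `alpha`, and a symmetric split of the miscoverage budget across the two bounds; the helper `tighten` and all variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration set: true values with valid but conservative bounds.
n = 1000
y = rng.normal(size=n)
lower = y - rng.uniform(1.0, 3.0, size=n)  # valid lower bounds (lower <= y)
upper = y + rng.uniform(1.0, 3.0, size=n)  # valid upper bounds (y <= upper)

alpha = 0.1  # accept 10% miscoverage in exchange for tighter intervals

# Slack of each bound on the calibration set (nonnegative by validity).
slack_lo = y - lower
slack_hi = upper - y

def tighten(slack, alpha_side, n):
    # One-sided conformal shrinkage: the k-th smallest slack, chosen so
    # that raising the bound by this amount miscovers with probability
    # at most alpha_side (finite-sample, under exchangeability).
    k = int(np.floor(alpha_side * (n + 1)))
    if k < 1:
        return 0.0  # too little data to tighten while keeping the guarantee
    return np.sort(slack)[k - 1]

# Split the miscoverage budget evenly between the two sides.
shrink_lo = tighten(slack_lo, alpha / 2, n)
shrink_hi = tighten(slack_hi, alpha / 2, n)

# For a new point with valid bounds (L_new, U_new), the tightened interval
# [L_new + shrink_lo, U_new - shrink_hi] covers y_new with prob >= 1 - alpha.
covered = np.mean((y >= lower + shrink_lo) & (y <= upper - shrink_hi))
```

The key design point this sketch mirrors is that the original bounds are never widened: validity of the inputs means the calibration slacks are nonnegative, so the procedure can only shrink the interval, trading a controlled amount of coverage for tightness.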