Conformal Prediction: A Data Perspective

Xiaofan Zhou, Baiting Chen, Yu Gui, Lu Cheng

arXiv.org Artificial Intelligence 

The rapid development of powerful machine learning (ML) models has transformed many aspects of daily life. However, these models are typically evaluated by predictive accuracy, which, while important, is insufficient in many real-world settings. In high-stakes applications, it is equally critical to quantify the uncertainty of model outputs. Uncertainty quantification (UQ) has long been a central problem in statistics and ML, and well-established approaches such as Bayesian inference and resampling techniques are widely used to address it. However, Bayesian posterior intervals are valid only if the model's parametric assumptions are correctly specified, which often does not hold in practice.
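Conformal prediction, the subject of this survey, sidesteps such parametric assumptions. As a brief illustration (our own minimal sketch, not the survey's notation), split conformal prediction wraps any fitted point predictor: it computes absolute residuals on a held-out calibration set and uses their empirical quantile as the interval half-width, yielding finite-sample marginal coverage under exchangeability alone.

```python
# Minimal sketch of split conformal prediction (illustrative only).
# The function names and the toy "model" below are our own choices.
import numpy as np

def split_conformal_interval(residuals_calib, y_pred_new, alpha=0.1):
    """Prediction intervals for new points from calibration residuals.

    residuals_calib: |y_i - f(x_i)| on a held-out calibration set.
    y_pred_new: model predictions f(x) at the new points.
    Under exchangeability, P(y in interval) >= 1 - alpha,
    regardless of how well the model f is specified.
    """
    n = len(residuals_calib)
    # Conformal quantile level: ceil((n + 1) * (1 - alpha)) / n.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q_hat = np.quantile(residuals_calib, q_level, method="higher")
    return y_pred_new - q_hat, y_pred_new + q_hat

# Toy example: a deliberately misspecified "model" that predicts 0
# everywhere; the intervals still attain ~90% marginal coverage.
rng = np.random.default_rng(0)
y_calib = rng.normal(size=1000)
residuals = np.abs(y_calib - 0.0)
lo, hi = split_conformal_interval(residuals, np.zeros(2000), alpha=0.1)
y_new = rng.normal(size=2000)
coverage = np.mean((y_new >= lo) & (y_new <= hi))
```

Even with a poor point predictor, the intervals above remain valid (they are merely wider), which is exactly the distribution-free guarantee that motivates the methods reviewed in this paper.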