Distribution-Free Predictive Inference under Unknown Temporal Drift

Elise Han, Chengpiao Huang, Kaizheng Wang

arXiv.org Machine Learning 

Modern prediction models often have complex structures and are generally accessed as black boxes. To assess their reliability and safeguard against potential errors, it is important to quantify the uncertainty in their outputs. Predictive inference is a popular methodology for this purpose: it takes as input a prediction algorithm and calibration data, and outputs a prediction set that contains the true outcome with a prescribed probability. The validity of the prediction set hinges on the assumption that the calibration data truthfully represent the underlying environment. However, this assumption is frequently violated in practice, where the data distribution may drift over time. Integrating data from both current and historical periods to construct faithful prediction sets remains a significant challenge. Despite a large body of literature on learning under distribution drift over the past two decades (Hazan and Seshadhri, 2009; Mohri and Muñoz Medina, 2012; Besbes et al., 2015; Hanneke et al., 2015; Mazzetto and Upfal, 2023; Huang and Wang, 2023), statistical inference within this context is much less explored.
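To make the predictive-inference recipe concrete, here is a minimal sketch of split conformal prediction, the standard instance of the pipeline described above (a prediction algorithm plus calibration data in, a prediction set out). This is the generic exchangeable-data construction, not the paper's drift-robust method; the function name `conformal_interval` and the toy linear model are illustrative assumptions.

```python
import numpy as np

def conformal_interval(predict, X_cal, y_cal, X_test, alpha=0.1):
    """Split conformal prediction: returns intervals that contain the
    true outcome with probability >= 1 - alpha, assuming the calibration
    and test data are exchangeable (no distribution drift)."""
    # Nonconformity scores on the calibration set: absolute residuals.
    scores = np.abs(y_cal - predict(X_cal))
    n = len(scores)
    # Finite-sample-corrected quantile of the calibration scores.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level, method="higher")
    preds = predict(X_test)
    return preds - q, preds + q

# Usage: wrap any black-box predictor (here a toy linear model).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 1))
y = 2 * X[:, 0] + rng.normal(size=500)
predict = lambda X: 2 * X[:, 0]  # stand-in for a black-box model
lo, hi = conformal_interval(predict, X[:250], y[:250], X[250:], alpha=0.1)
coverage = np.mean((y[250:] >= lo) & (y[250:] <= hi))
```

The coverage guarantee holds only under exchangeability; when the calibration data come from a drifted distribution, the realized coverage can fall below the nominal 1 - alpha, which is precisely the failure mode the paper targets.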
