Blackwell's Approachability for Sequential Conformal Inference

Principato, Guillaume, Stoltz, Gilles

arXiv.org Machine Learning 

Conformal inference [Vovk et al., 2005] provides a general procedure for constructing prediction sets with guaranteed coverage, under the assumption that the data are exchangeable. This assumption, however, is often too restrictive: it typically fails in sequential or time-dependent settings such as time series forecasting, where the distribution of observations may shift over time. To address this issue, Gibbs and Candès [2021] introduced Adaptive Conformal Inference (ACI), which extends Conformal Prediction (CP) to adversarial environments. ACI adapts to distribution shifts by updating prediction intervals in response to observed outcomes, ensuring that the empirical coverage converges to the desired level. While effective in maintaining coverage, ACI and its extensions generally lack efficiency guarantees: for instance, there is no control over the average length of prediction intervals in adversarial regimes. In this work, we study sequential conformal inference as a repeated two-player finite game and invoke Blackwell's theory of approachability to characterize feasible objectives. Building on this perspective, we design a calibration-based algorithm that ensures asymptotic validity while achieving asymptotic efficiency under mild assumptions. Our approach relies on the notion of opportunistic approachability [Bernstein et al., 2014], which allows the learner to exploit potential restrictions in the opponent's play. We argue that such assumptions better fit the typical use cases of ACI, such as distributional drift or regime switching, than the fully adversarial setting.
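To make the ACI mechanism referenced above concrete, the following is a minimal sketch of the update rule of Gibbs and Candès [2021], which adjusts a working miscoverage level after each observed outcome. The toy score model (uniform conformity scores) and the names `aci_step`, `alpha`, and `gamma` are illustrative choices, not part of the paper.

```python
import numpy as np

def aci_step(alpha_t, err_t, alpha, gamma):
    # ACI update (Gibbs and Candes, 2021): after a miscoverage event
    # (err_t = 1) the working level decreases, widening future sets;
    # after a covered outcome (err_t = 0) it increases, tightening them.
    return alpha_t + gamma * (alpha - err_t)

rng = np.random.default_rng(0)
alpha, gamma = 0.1, 0.05   # target miscoverage and step size (illustrative)
alpha_t = alpha
errs = []
for t in range(20000):
    # Toy exchangeable setting: the prediction set at level alpha_t covers
    # iff a uniform conformity score falls below 1 - alpha_t.
    score = rng.uniform()
    err = 1.0 if score > 1.0 - alpha_t else 0.0
    errs.append(err)
    alpha_t = aci_step(alpha_t, err, alpha, gamma)

print(round(float(np.mean(errs)), 3))  # empirical miscoverage, close to alpha
```

The telescoping structure of the update forces the empirical miscoverage frequency to converge to the target level `alpha` regardless of the data-generating process, which is the validity guarantee the abstract refers to; no analogous control is obtained on interval length.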