Learning to Partially Defer for Sequences
Rayan, Sahana, Tewari, Ambuj
In the Learning to Defer (L2D) framework, a prediction model can either make a prediction or defer it to an expert, as determined by a rejector. Current L2D methods train the rejector to decide whether to defer the entire prediction, which is undesirable when the model predicts long sequences. We present an L2D setting for sequence outputs in which the system can defer specific parts of the model's prediction to an expert, interleaving the expert and the machine throughout the sequence. We propose two types of model-based post-hoc rejectors for pre-trained predictors: a token-level rejector, which defers specific token predictions to experts with next-token prediction capabilities, and a one-time rejector for experts without such abilities, which defers the remaining sequence from a specific point onward. We empirically demonstrate that such granular deferrals achieve better cost-accuracy tradeoffs than whole-sequence deferrals on Traveling Salesman solvers and news summarization models.
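To make the two deferral modes concrete, here is a minimal sketch (all names hypothetical, not the paper's API): given per-position rejection scores, a token-level rejector swaps in the expert's token at every flagged position, interleaving the two predictors.

```python
# Hedged sketch of token-level deferral. A rejection score at each position
# decides whether the model's token is kept or that position is deferred
# to the expert, interleaving the two through the sequence.

def interleave_decode(model_tokens, expert_tokens, reject_scores, threshold=0.5):
    """Keep the model token when its rejection score is below the threshold;
    otherwise defer that position to the expert. Returns the merged sequence
    and the indices that were deferred."""
    output, deferred = [], []
    for i, (m, e, s) in enumerate(zip(model_tokens, expert_tokens, reject_scores)):
        if s < threshold:
            output.append(m)      # trust the model at this position
        else:
            output.append(e)      # defer this token to the expert
            deferred.append(i)
    return output, deferred

# Example: only position 1 exceeds the threshold, so only it is deferred.
out, idx = interleave_decode(["the", "cat", "sat"],
                             ["a", "dog", "sat"],
                             [0.1, 0.9, 0.2])
```

A one-time rejector, by contrast, would hand over everything from the first flagged position onward, which suits experts that cannot continue from an arbitrary prefix.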
Conformal Contextual Robust Optimization
Patel, Yash, Rayan, Sahana, Tewari, Ambuj
Predict-then-optimize or contextual robust optimization problems are of long-standing interest in safety-critical settings where decision-making happens under uncertainty (Sun, Liu, and Li, 2023; Elmachtoub and Grigas, 2022; Elmachtoub, Liang, and McNellis, 2020; Peršak and Anjos, 2023). In traditional robust optimization, decisions are made robust to the distributions anticipated upon deployment (Ben-Tal, El Ghaoui, and Nemirovski, 2009; Beyer and Sendhoff, 2007). Since such decisions are sensitive to proper model specification, recent efforts have sought to supplant this with data-driven uncertainty regions (Cheramin et al., 2021; Bertsimas, Gupta, and Kallus, 2018; Shang and You, 2019; Johnstone and Cox, 2021). Model misspecification is ever more present in contextual robust optimization, spurring efforts to define similar data-driven uncertainty regions (Ohmori, 2021; Chenreddy, Bandi, and Delage, 2022; Sun, Liu, and Li, 2023). Such methods, however, focus on box- and ellipsoid-based uncertainty regions, both of which are necessarily convex and often overly conservative, resulting in suboptimal decision-making. Conformal prediction provides a principled framework for producing distribution-free prediction regions with marginal frequentist coverage guarantees (Angelopoulos and Bates, 2021; Shafer and Vovk, 2008).
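For readers unfamiliar with the conformal machinery the abstract invokes, here is a minimal split conformal sketch (standard technique, not this paper's method; function name hypothetical): calibration residuals yield a quantile that widens a point prediction into an interval with marginal coverage roughly 1 - alpha.

```python
import numpy as np

def split_conformal_interval(cal_preds, cal_labels, test_pred, alpha=0.1):
    """Split conformal prediction interval from absolute-residual scores.

    Uses the ceil((n+1)(1-alpha))/n empirical quantile of the calibration
    residuals, which gives marginal coverage >= 1 - alpha under
    exchangeability."""
    scores = np.abs(np.asarray(cal_labels) - np.asarray(cal_preds))
    n = len(scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level, method="higher")
    return test_pred - q, test_pred + q

# Example: residuals 1..9 on 9 calibration points; with alpha=0.1 the
# required quantile is the largest residual, so the interval is +/- 9.
lo, hi = split_conformal_interval(np.zeros(9), np.arange(1.0, 10.0), 0.0)
```

The paper's point is that such regions, unlike boxes or ellipsoids, need not be convex, which is what avoids the over-conservatism of prior data-driven uncertainty sets.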