Risk Bounds and Calibration for a Smart Predict-then-Optimize Method

Neural Information Processing Systems 

The predict-then-optimize framework is fundamental in practical stochastic decision-making problems: first predict the unknown parameters of an optimization model, then solve the problem using the predicted values. A natural loss function in this setting measures the decision error induced by the predicted parameters, and was named the Smart Predict-then-Optimize (SPO) loss by Elmachtoub and Grigas [2021]. Since the SPO loss is typically nonconvex and possibly discontinuous, Elmachtoub and Grigas [2021] also introduced a convex surrogate, called the SPO+ loss, that importantly accounts for the underlying structure of the optimization model. In this paper, we greatly expand upon the consistency results for the SPO+ loss provided by Elmachtoub and Grigas [2021]. We develop risk bounds and uniform calibration results for the SPO+ loss relative to the SPO loss, which provide a quantitative way to transfer the excess surrogate risk to the excess true risk.
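To make the setup concrete, here is a minimal sketch of the SPO loss and the SPO+ surrogate for a linear optimization problem whose feasible region is given by its vertices. The function names and the toy simplex instance are illustrative assumptions, not artifacts from the paper; the loss formulas follow Elmachtoub and Grigas.

```python
def argmin_vertex(c, vertices):
    """Minimize the linear objective c^T w over a polytope described by its
    vertex list; a linear objective attains its minimum at a vertex."""
    vals = [sum(ci * wi for ci, wi in zip(c, w)) for w in vertices]
    i = min(range(len(vals)), key=vals.__getitem__)
    return vertices[i], vals[i]

def spo_loss(c_hat, c, vertices):
    """Decision error of acting on the prediction c_hat when the true cost
    vector is c: the excess cost c^T w*(c_hat) - z*(c)."""
    w_hat, _ = argmin_vertex(c_hat, vertices)
    _, z_star = argmin_vertex(c, vertices)
    return sum(ci * wi for ci, wi in zip(c, w_hat)) - z_star

def spo_plus_loss(c_hat, c, vertices):
    """Convex SPO+ surrogate:
    max_w {(c - 2*c_hat)^T w} + 2*c_hat^T w*(c) - z*(c)."""
    # max of a linear function = -(min of its negation)
    _, neg_max = argmin_vertex(
        [2 * ch - ci for ch, ci in zip(c_hat, c)], vertices)
    w_star, z_star = argmin_vertex(c, vertices)
    return -neg_max + 2 * sum(ch * wi for ch, wi in zip(c_hat, w_star)) - z_star

# Toy instance: pick one of three options (unit simplex vertices).
vertices = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
c_true = (1.0, 2.0, 3.0)   # true costs
c_pred = (3.0, 2.0, 1.0)   # a bad prediction that reverses the ranking
print(spo_loss(c_pred, c_true, vertices))       # decision error: 2.0
print(spo_plus_loss(c_pred, c_true, vertices))  # convex upper bound: 6.0
```

Note the two properties the paper relies on: the SPO+ loss upper-bounds the SPO loss, and both vanish when the prediction recovers the true cost vector; the risk bounds developed here quantify how excess SPO+ risk controls excess SPO risk.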