(Almost) Provable Error Bounds Under Distribution Shift via Disagreement Discrepancy

Neural Information Processing Systems 

We derive a new, (almost) guaranteed upper bound on the error of deep neural networks under distribution shift using unlabeled test data. Prior methods are either vacuous in practice or accurate on average but heavily underestimate error for a sizeable fraction of shifts. In particular, the latter only give guarantees based on complex continuous measures such as test calibration, which cannot be identified without labels, and are therefore unreliable. Instead, our bound requires a simple, intuitive condition which is well justified by prior empirical works and holds in practice effectively 100\% of the time. The bound is inspired by $\mathcal{H}\Delta\mathcal{H}$-divergence but is easier to evaluate and substantially tighter, consistently providing non-vacuous test error upper bounds.
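To make the abstract's idea concrete, here is a minimal sketch of how a disagreement-based bound of this flavor could be evaluated. The exact form below is an assumption, not the paper's stated theorem: it takes a model $h$, a "critic" classifier $h'$ (which the method would train to disagree with $h$ on the target data while agreeing on the source), and bounds target error by source error plus the gap between target and source disagreement rates. All function and variable names are illustrative.

```python
import numpy as np

def disagreement_discrepancy_bound(h_src, y_src, hp_src, h_tgt, hp_tgt):
    """Hypothetical bound sketch: target error <= source error
    + (disagreement on target - disagreement on source).

    h_src, h_tgt : model h's predicted labels on source / target inputs
    y_src        : true labels on the (labeled) source set
    hp_src, hp_tgt : critic h''s predicted labels on source / target inputs
    Note: target labels are never used -- only unlabeled test data.
    """
    src_err = np.mean(np.asarray(h_src) != np.asarray(y_src))    # labeled source error
    dis_src = np.mean(np.asarray(h_src) != np.asarray(hp_src))   # h vs. critic on source
    dis_tgt = np.mean(np.asarray(h_tgt) != np.asarray(hp_tgt))   # h vs. critic on target
    return src_err + (dis_tgt - dis_src)

# Toy example: 25% source error, critic agrees fully on source
# but disagrees on half the target points.
bound = disagreement_discrepancy_bound(
    h_src=[0, 1, 1, 0], y_src=[0, 1, 0, 0],
    hp_src=[0, 1, 1, 0],
    h_tgt=[1, 0, 1, 1], hp_tgt=[0, 0, 1, 0],
)
print(bound)  # 0.25 + (0.5 - 0.0) = 0.75
```

In a real instantiation, $h'$ would be chosen to maximize this discrepancy, so the printed value is an estimate of an upper bound on target error for this particular critic.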