Minimax Classification with 0-1 Loss and Performance Guarantees

Neural Information Processing Systems 

Supervised classification techniques use training samples to find classification rules with small expected 0-1 loss. Conventional methods achieve efficient learning and out-of-sample generalization by minimizing surrogate losses over specific families of rules. This paper presents minimax risk classifiers (MRCs) that do not rely on a choice of surrogate loss and family of rules. MRCs achieve efficient learning and out-of-sample generalization by minimizing the worst-case expected 0-1 loss with respect to uncertainty sets of distributions that are defined by linear constraints and can include the true underlying distribution. In addition, MRCs' learning stage provides performance guarantees in the form of tight lower and upper bounds for the expected 0-1 loss.
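
To make the worst-case objective concrete, the following is a minimal sketch, not the paper's algorithm: for a fixed randomized classification rule on toy finite spaces, it computes the worst-case expected 0-1 loss over an uncertainty set of distributions defined by linear feature-expectation constraints, by solving a linear program. The rule `h`, the feature map `phi`, and the values `tau` and `lam` are illustrative assumptions, not quantities taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Toy finite spaces: instances X = {0, 1}, labels Y = {0, 1}.
# Enumerate the joint outcomes (x, y) in a fixed order.
outcomes = [(x, y) for x in (0, 1) for y in (0, 1)]
n = len(outcomes)

# A fixed randomized rule h(y|x): rows indexed by x, columns by y (illustrative).
h = np.array([[0.9, 0.1],   # h(y=0|x=0), h(y=1|x=0)
              [0.2, 0.8]])  # h(y=0|x=1), h(y=1|x=1)

# Expected 0-1 loss under a distribution p: sum_{x,y} p(x,y) * (1 - h(y|x)).
loss_vec = np.array([1.0 - h[x, y] for (x, y) in outcomes])

# A single illustrative feature phi(x, y) = x * 1{y=1}, with target mean tau
# (e.g., estimated from training data) and slack lam on that estimate.
phi = np.array([[float(x * (y == 1))] for (x, y) in outcomes])  # shape (n, 1)
tau, lam = np.array([0.25]), np.array([0.05])

# Worst case: maximize loss_vec @ p subject to p in the probability simplex
# and |phi^T p - tau| <= lam. linprog minimizes, so negate the objective.
A_ub = np.vstack([phi.T, -phi.T])
b_ub = np.concatenate([tau + lam, -(tau - lam)])
res = linprog(-loss_vec, A_ub=A_ub, b_ub=b_ub,
              A_eq=np.ones((1, n)), b_eq=[1.0], bounds=[(0, 1)] * n)
assert res.success
print("worst-case expected 0-1 loss:", -res.fun)
```

Under these assumptions, the LP value is an upper bound on the expected 0-1 loss of `h` for every distribution in the uncertainty set; the minimax rule of the paper would additionally minimize this value over all randomized rules `h`, which this sketch does not do.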