Adversarial Multiclass Classification: A Risk Minimization Perspective
Rizal Fathony, Anqi Liu, Kaiser Asif, Brian Ziebart
Recently proposed adversarial classification methods have shown promising results for cost-sensitive and multivariate losses. In contrast with empirical risk minimization (ERM) methods, which use convex surrogate losses to approximate the desired non-convex target loss function, adversarial methods minimize non-convex losses by treating the properties of the training data as uncertain and worst-case within a minimax game. Despite this difference in formulation, we recast adversarial classification under zero-one loss as an ERM method with a novel prescribed loss function. We demonstrate a number of theoretical and practical advantages over the closely related hinge loss ERM methods. This establishes adversarial classification under the zero-one loss as a method that fills the long-standing gap in multiclass hinge loss classification, simultaneously guaranteeing Fisher consistency and universal consistency, while also providing dual parameter sparsity and high-accuracy predictions in practice.
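For concreteness, the prescribed loss can be sketched as follows; the notation is an illustrative assumption (f_j(x) is the potential assigned to class j, y the true label, and psi_j(x) = f_j(x) - f_y(x)), and constants or indexing may differ from the authors' exact statement. The adversarial zero-one surrogate is a pointwise maximum over non-empty subsets S of the label set:

\[
\mathrm{AL}^{0\text{-}1}(f, x, y) \;=\; \max_{\substack{S \subseteq \mathcal{Y} \\ S \neq \emptyset}} \frac{\sum_{j \in S} \psi_j(x) + |S| - 1}{|S|}, \qquad \psi_j(x) = f_j(x) - f_y(x).
\]

Note that the singleton S = {y} contributes psi_y(x) = 0, so the loss is non-negative; and in the binary case the sketch reduces to max{0, psi, (psi + 1)/2} for psi = f_{-y}(x) - f_y(x), a hinge-like margin loss, which is what makes the comparison with hinge loss ERM methods so direct.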
Reviews: Adversarial Multiclass Classification: A Risk Minimization Perspective
Based on the adversarial game formulation proposed in [16], the authors show that the adversarial game is equivalent to an empirical risk minimization in which the loss function is a pointwise maximum of 2^{|Y|} cost functions (|Y| is the number of classes). The authors then prove that this loss is Fisher consistent, achieves comparable empirical results to existing methods that are not Fisher consistent, and significantly outperforms the existing consistent method. My main concern is whether the contribution is significant enough. The main result appears to be relating the adversarial game formulation proposed in [16] to the ERM framework (with a rather complicated loss function), and it does appear incremental to me. A second issue is that, unlike ERM, the "adversarial game" formulation is not yet a standard scheme in machine learning, and more effort may be needed to convince readers (this reviewer at least) that it is a well-motivated framework rather than an ad hoc one.
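To make the reviewer's "pointwise maximum of 2^{|Y|} cost functions" point concrete, here is a minimal Python sketch; it is not the authors' implementation, and the function names and naive subset enumeration are illustrative assumptions. The second function exploits the fact that, for a fixed subset size k, the maximum is attained by the k largest psi_j, so only prefix subsets of the sorted potentials need to be checked:

from itertools import combinations

def al01_loss(potentials, y):
    # Naive evaluation of the adversarial zero-one surrogate:
    # enumerate all 2^|Y| - 1 non-empty subsets S and return
    # max_S (sum_{j in S} psi_j + |S| - 1) / |S|, with psi_j = f_j - f_y.
    # Exponential in |Y|; for illustration only.
    psi = [f - potentials[y] for f in potentials]
    classes = range(len(potentials))
    return max(
        (sum(psi[j] for j in S) + len(S) - 1) / len(S)
        for k in range(1, len(potentials) + 1)
        for S in combinations(classes, k)
    )

def al01_loss_sorted(potentials, y):
    # Equivalent O(|Y| log |Y|) evaluation: sort psi in decreasing order
    # and check only the prefix subsets of sizes k = 1, ..., |Y|.
    psi = sorted((f - potentials[y] for f in potentials), reverse=True)
    best, running = float("-inf"), 0.0
    for k, p in enumerate(psi, start=1):
        running += p
        best = max(best, (running + k - 1) / k)
    return best

As a quick check, al01_loss([0.0, 0.5], y=0) and al01_loss_sorted([0.0, 0.5], y=0) both return 0.75, the maximum over the three non-empty subsets {0}, {1}, and {0, 1}.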