Consistent Robust Adversarial Prediction for General Multiclass Classification
Fathony, Rizal, Asif, Kaiser, Liu, Anqi, Bashiri, Mohammad Ali, Xing, Wei, Behpour, Sima, Zhang, Xinhua, Ziebart, Brian D.
Examples of the task include zero-one loss classification, where the predictor suffers a loss of one for an incorrect prediction and zero otherwise, and ordinal classification (also known as ordinal regression), where the predictor suffers a loss that increases as the prediction moves away from the true label. Empirical risk minimization (ERM) (Vapnik, 1992) is a standard approach for solving general multiclass classification problems by finding the classifier that minimizes a loss metric over the training data. However, since directly minimizing this loss over training data within the ERM framework is generally NP-hard (Steinwart and Christmann, 2008), convex surrogate losses that can be efficiently optimized are employed to approximate the loss. Constructing surrogate losses for binary classification has been well studied, resulting in surrogate losses that enjoy desirable theoretical properties and good performance in practice. Among the popular examples are the logarithmic loss, which is minimized by the logistic regression classifier (McCullagh and Nelder, 1989), and the hinge loss, which is minimized by the support vector machine (SVM) (Boser et al., 1992; Cortes and Vapnik, 1995).
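The losses named above can be written down concretely. Below is a minimal sketch (not taken from the paper) of the zero-one loss, an absolute-error loss for ordinal classification, and the two binary surrogates mentioned, the hinge loss and the logarithmic loss; the binary versions assume labels in {-1, +1} and a real-valued score, a common convention.

```python
import math

def zero_one_loss(y_true, y_pred):
    # Loss of one for each incorrect prediction, zero otherwise.
    return sum(yt != yp for yt, yp in zip(y_true, y_pred)) / len(y_true)

def ordinal_loss(y_true, y_pred):
    # Ordinal (absolute-error) loss: grows as the prediction
    # moves away from the true label.
    return sum(abs(yt - yp) for yt, yp in zip(y_true, y_pred)) / len(y_true)

def hinge_loss(y_true, scores):
    # Convex surrogate minimized by the SVM; yt in {-1, +1}.
    return sum(max(0.0, 1.0 - yt * s)
               for yt, s in zip(y_true, scores)) / len(y_true)

def log_loss(y_true, scores):
    # Logarithmic surrogate minimized by logistic regression.
    return sum(math.log1p(math.exp(-yt * s))
               for yt, s in zip(y_true, scores)) / len(y_true)

# A toy binary example: one of four sign predictions is wrong,
# so the zero-one loss is 0.25, while both surrogates upper-bound
# mistakes with smooth, optimizable penalties.
y = [1, -1, 1, 1]
s = [0.8, -0.5, -0.2, 2.0]
preds = [1 if v > 0 else -1 for v in s]
print(zero_one_loss(y, preds))   # 0.25
print(hinge_loss(y, s))          # 0.475
```

The key contrast this illustrates is that the zero-one and ordinal losses are piecewise constant in the score (hence hard to optimize directly), while the hinge and logarithmic losses are convex in the score, which is what makes efficient ERM possible.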
Nov-20-2019