Direct Optimization of Margins Improves Generalization in Combined Classifiers
Mason, Llew, Bartlett, Peter L., Baxter, Jonathan
Neural Information Processing Systems
Figure: The dark curve is AdaBoost, the light curve is DOOM. DOOM sacrifices significant training error for improved test error (horizontal marks on the margin-0 line).

1 Introduction

Many learning algorithms for pattern classification minimize some cost function of the training data, with the aim of minimizing error (the probability of misclassifying an example). One example of such a cost function is simply the classifier's error on the training data.
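As a minimal sketch of the cost function just mentioned, the 0-1 training error simply counts the fraction of misclassified training examples. The classifier and data below are hypothetical toy values, not from the paper:

```python
# Illustrative sketch: the 0-1 training error as a cost function.
def training_error(predict, examples):
    """Fraction of (x, y) pairs that predict() misclassifies."""
    mistakes = sum(1 for x, y in examples if predict(x) != y)
    return mistakes / len(examples)

# Toy threshold classifier on 1-D inputs, labels in {-1, +1}.
predict = lambda x: 1 if x >= 0 else -1
examples = [(-2, -1), (-1, -1), (0.5, 1), (1, 1), (-0.5, 1)]
print(training_error(predict, examples))  # one mistake out of five -> 0.2
```

Algorithms such as AdaBoost and DOOM instead optimize smooth surrogates of this quantity (e.g. functions of the margins), since the 0-1 error itself is difficult to minimize directly.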
Dec-31-1999