Direct Optimization of Margins Improves Generalization in Combined Classifiers

Llew Mason, Peter L. Bartlett, Jonathan Baxter

Neural Information Processing Systems 

[Figure: The dark curve is AdaBoost, the light curve is DOOM. DOOM sacrifices significant training error for improved test error (horizontal marks on the margin-0 line).]

1 Introduction

Many learning algorithms for pattern classification minimize some cost function of the training data, with the aim of minimizing error (the probability of misclassifying an example). One example of such a cost function is simply the classifier's error on the training data.
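As a minimal illustration of this kind of cost function (a sketch of the generic idea, not the paper's method), the code below computes training error under 0-1 loss; it assumes labels in {-1, +1} and a real-valued classifier whose sign gives the predicted label.

```python
import numpy as np

def training_error(classifier, X, y):
    """Fraction of training examples the classifier misclassifies
    (the 0-1 loss cost function described above)."""
    predictions = np.sign(classifier(X))  # assumes labels are +1 / -1
    return np.mean(predictions != y)

# Hypothetical usage: a simple linear classifier thresholded at zero.
w = np.array([0.5, -1.0])
clf = lambda X: X @ w
X_train = np.array([[1.0, 0.2], [0.3, 1.5], [2.0, 0.1]])
y_train = np.array([1, -1, 1])
print(training_error(clf, X_train, y_train))  # 0.0 on this toy data
```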
