Weak Learners and Improved Rates of Convergence in Boosting

Shie Mannor, Ron Meir

Neural Information Processing Systems 

We present an algorithm that produces a linear classifier that is guaranteed to achieve an error better than random guessing for any distribution on the data. While this weak learner is not useful for learning in general, we show that under reasonable conditions on the distribution it yields an effective weak learner for one-dimensional problems. Preliminary simulations suggest that similar behavior can be expected in higher dimensions, a result which is corroborated by some recent theoretical bounds. Additionally, we provide improved convergence rate bounds for the generalization error in situations where the empirical error can be made small, which is exactly the situation that occurs if weak learners with guaranteed performance that is better than random guessing can be established.
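The paper itself gives the construction; as a loose illustration of the one-dimensional setting, the sketch below exhaustively searches threshold classifiers (decision stumps) of both polarities and returns the one with the smallest weighted error. The function name `best_stump` and the toy data are our own assumptions, not from the paper; the point is only that on a weighted 1-D sample such a search can return a classifier whose weighted error is below 1/2.

```python
import numpy as np

def best_stump(x, y, w):
    """Search 1-D threshold classifiers h(x) = s * sign(x - t) over both
    polarities s in {-1, +1} and all candidate thresholds t; return the
    (t, s, weighted error) triple minimizing the weighted error."""
    w = w / w.sum()
    xs = np.sort(np.unique(x))
    # Candidate thresholds: midpoints between sorted points, plus extremes.
    thresholds = np.concatenate(
        ([xs[0] - 1.0], (xs[:-1] + xs[1:]) / 2.0, [xs[-1] + 1.0]))
    best = (None, None, 1.0)
    for t in thresholds:
        for s in (-1, 1):
            pred = s * np.where(x > t, 1, -1)
            err = w[pred != y].sum()
            if err < best[2]:
                best = (t, s, err)
    return best

# Toy weighted sample: mostly separable by a threshold near 0,
# with one "noisy" point at x = 2.
x = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])
y = np.array([-1, -1, -1, 1, 1, -1])
w = np.ones(6) / 6
t, s, err = best_stump(x, y, w)  # err is well below 1/2 here
```

Note that flipping the polarity `s` guarantees the weighted error never exceeds 1/2: if a stump errs on more than half the weight, its negation errs on less.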
