Boosting the Area under the ROC Curve

Neural Information Processing Systems

We show that any weak ranker that can achieve an area under the ROC curve slightly better than 1/2 (the value achieved by random guessing) can be efficiently boosted to achieve an area under the ROC curve arbitrarily close to 1. We further show that this boosting can be performed even in the presence of independent misclassification noise, given access to a noise-tolerant weak ranker.
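
Since the AUC equals the probability that a uniformly random positive example is scored above a uniformly random negative one (the Wilcoxon-Mann-Whitney statistic), the random-guessing baseline of 1/2 is easy to check directly. The following minimal Python sketch is illustrative, not from the paper; the function and variable names are assumptions.

```python
import itertools
import random

def empirical_auc(pos_scores, neg_scores):
    """Fraction of (positive, negative) pairs ranked correctly, counting
    ties as half-correct -- the Wilcoxon-Mann-Whitney form of the AUC."""
    correct = 0.0
    for p, n in itertools.product(pos_scores, neg_scores):
        if p > n:
            correct += 1.0
        elif p == n:
            correct += 0.5
    return correct / (len(pos_scores) * len(neg_scores))

# A ranker that scores examples uniformly at random has expected AUC 1/2;
# a weak ranker in the paper's sense only needs AUC >= 1/2 + gamma.
random.seed(0)
pos = [random.random() for _ in range(500)]
neg = [random.random() for _ in range(500)]
print(empirical_auc(pos, neg))  # close to 0.5
```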


Adaptive Martingale Boosting

Neural Information Processing Systems

In recent work, Long and Servedio [LS05] presented a "martingale boosting" algorithm that works by constructing a branching program over weak classifiers and has a simple analysis based on elementary properties of random walks. They showed that this martingale booster can tolerate random classification noise when it is run with a noise-tolerant weak learner; however, a drawback of the algorithm is that it is not adaptive, i.e., it cannot effectively take advantage of variation in the quality of the weak classifiers it receives. In this paper we present a variant of the original martingale boosting algorithm and prove that it is adaptive. This adaptiveness is achieved by modifying the original algorithm so that the random walks that arise in its analysis have different step sizes depending on the quality of the weak learner at each stage. The new algorithm inherits the desirable properties of the original [LS05] algorithm, such as tolerance of random classification noise, and has several other advantages besides adaptiveness: it requires polynomially fewer calls to the weak learner than the original algorithm, and it can be used with confidence-rated weak hypotheses that output real values rather than Boolean predictions.
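
To make the branching-program picture concrete, here is a minimal Python sketch under assumed interfaces, not the paper's algorithm: an example is routed through one weak classifier per node, the {-1, +1} predictions drive a random walk, and the sign of the walk's endpoint is the predicted label. In the adaptive variant, the step taken at each level would be scaled by that level's weak-hypothesis quality rather than always being +/-1.

```python
from dataclasses import dataclass

@dataclass
class Stump:
    """Hypothetical weak classifier: thresholds a single coordinate."""
    coord: int
    thresh: float

    def predict(self, x):
        return 1 if x[self.coord] >= self.thresh else -1

def branching_program_predict(nodes, x, num_levels):
    """Route x through the branching program: the weak classifier at node
    (level, position) moves a +1/-1 walk; the endpoint's sign is the label."""
    position = 0
    for level in range(num_levels):
        position += nodes[(level, position)].predict(x)
    return 1 if position >= 0 else -1

# Illustrative demo: one (trivial, identical) stump at each reachable node.
T = 5
nodes = {(t, i): Stump(coord=0, thresh=0.0)
         for t in range(T) for i in range(-t, t + 1, 2)}
print(branching_program_predict(nodes, [0.3, -1.2], T))  # prints 1
```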


Learning large-margin halfspaces with more malicious noise

Neural Information Processing Systems

We describe a simple algorithm that runs in time poly(n, 1/gamma, 1/eps) and learns an unknown n-dimensional gamma-margin halfspace to accuracy 1-eps in the presence of malicious noise, when the noise rate is allowed to be as high as Theta(eps gamma sqrt(log(1/gamma))). Previous efficient algorithms could only learn to accuracy 1-eps when the malicious noise rate was at most Theta(eps gamma). Our algorithm does not work by optimizing a convex loss function. We show that no algorithm for learning gamma-margin halfspaces that minimizes a convex proxy for misclassification error can tolerate malicious noise at a rate greater than Theta(eps gamma); this may partially explain why previous algorithms could not achieve the higher noise tolerance of our new algorithm.
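
As a point of contrast with convex-loss minimizers, the classical "averaging" learner sketched below is one of the simplest approaches to margin halfspaces that optimizes no convex loss: the hypothesis is just the label-weighted mean of the examples. This is a hedged illustration of the general idea only, not the algorithm analyzed in the paper, and the data-generation details are assumptions.

```python
import numpy as np

def averaging_halfspace(X, y):
    """Return the normalized label-weighted mean of the examples. No convex
    loss is minimized; each corrupted example can shift the average by at
    most its norm, one intuition for tolerance of malicious noise."""
    w = (y[:, None] * X).mean(axis=0)
    return w / np.linalg.norm(w)

# Hypothetical demo: unit-norm examples labeled by a target halfspace,
# with a small fraction of adversarially flipped labels.
rng = np.random.default_rng(0)
w_star = np.array([1.0, 0.0])
X = rng.normal(size=(500, 2))
X /= np.linalg.norm(X, axis=1, keepdims=True)
y = np.sign(X @ w_star)
y[:25] *= -1  # maliciously corrupted examples (5% noise rate)
w_hat = averaging_halfspace(X, y)
print((np.sign(X @ w_hat) == np.sign(X @ w_star)).mean())  # agreement with target
```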


AUC Optimization vs. Error Rate Minimization

Neural Information Processing Systems

The area under an ROC curve (AUC) is a criterion used in many applications to measure the quality of a classification algorithm. However, the objective function optimized by most of these algorithms is the error rate, not the AUC value. We give a detailed statistical analysis of the relationship between the AUC and the error rate, including the first exact expression for the expected value and the variance of the AUC at a fixed error rate. Our results show that the average AUC increases monotonically with classification accuracy, but that the standard deviation is noticeable for uneven class distributions and higher error rates. Thus, algorithms designed to minimize the error rate may not lead to the best possible AUC values. We show that, under certain conditions, the global objective function optimized by the RankBoost algorithm is exactly the AUC. We report the results of our experiments with RankBoost on several datasets, demonstrating the benefits of an algorithm specifically designed to globally optimize the AUC over existing algorithms that optimize an approximation of the AUC or only optimize it locally.
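
The gap between the two criteria is easy to exhibit numerically. In the sketch below (illustrative data, not from the paper), two score vectors misclassify exactly the same number of examples at threshold 0.5, yet their AUC values differ substantially, because the AUC also reflects how badly the errors are misranked.

```python
import numpy as np

def auc(scores, labels):
    """Empirical AUC as the fraction of correctly ordered positive/negative
    pairs (Wilcoxon-Mann-Whitney), with ties counted as half-correct."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

labels = np.array([1, 1, 1, 1, 0, 0, 0, 0])
# Both score vectors make exactly two errors at threshold 0.5 (25% error) ...
scores_a = np.array([0.9, 0.8, 0.7, 0.4, 0.6, 0.3, 0.2, 0.1])
scores_b = np.array([0.9, 0.8, 0.7, 0.1, 0.95, 0.3, 0.2, 0.05])
# ... but their AUCs differ: 0.9375 vs. 0.625.
print(auc(scores_a, labels), auc(scores_b, labels))
```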