Yoav Freund
An adaptive nearest neighbor rule for classification
Akshay Balsubramani, Sanjoy Dasgupta, Yoav Freund, Shay Moran
We introduce a variant of the k-nearest neighbor classifier in which k is chosen adaptively for each query, rather than being supplied as a parameter. The choice of k depends on properties of each neighborhood, and therefore may vary significantly between different points. For example, the algorithm will use a larger k when predicting the labels of points in noisy regions. We provide theory and experiments demonstrating that the algorithm performs comparably to, and sometimes better than, k-NN with an optimal choice of k. In particular, we bound the convergence rate of our classifier in terms of a local quantity we call the "advantage", giving results that are both more general and more accurate than the smoothness-based bounds of earlier nearest neighbor work. Our analysis uses a variant of the uniform convergence theorem of Vapnik-Chervonenkis that applies to empirical estimates of conditional probabilities and may be of independent interest.
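To make the idea of a per-query choice of k concrete, here is a minimal sketch in the spirit of the abstract, not the authors' exact procedure or constants: for a query point, grow the neighborhood until the empirical label bias among the k nearest neighbors clears a confidence margin, then predict the majority label. The function name adaptive_knn_predict, the binary-label assumption, and the margin sqrt(ln(1/delta)/k) are illustrative choices introduced here.

```python
import numpy as np

def adaptive_knn_predict(X_train, y_train, x_query, delta=0.05):
    """Sketch of an adaptive nearest-neighbor rule (illustrative, not the
    paper's exact algorithm).

    Assumes binary labels y_train in {0, 1}. The neighborhood size k grows
    until the empirical bias toward one label exceeds a confidence margin,
    so noisy regions naturally end up using larger k.
    """
    n = len(X_train)
    # Order training points by distance to the query.
    order = np.argsort(np.linalg.norm(X_train - x_query, axis=1))
    for k in range(1, n + 1):
        labels = y_train[order[:k]]
        bias = np.mean(labels) - 0.5              # empirical advantage of label 1
        margin = np.sqrt(np.log(1.0 / delta) / k)  # illustrative confidence margin
        if abs(bias) > margin:                     # neighborhood is confidently biased
            return int(bias > 0)
    # No neighborhood was conclusive: fall back to the global majority label.
    return int(np.mean(y_train) > 0.5)
```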