Statistical Guarantees of Distributed Nearest Neighbor Classification

Neural Information Processing Systems

Nearest neighbor is a popular nonparametric method for classification and regression with many appealing properties. In the big data era, the sheer volume and spatial/temporal disparity of big data may prohibit centrally processing and storing the data. This imposes a considerable hurdle for nearest neighbor predictions, since the entire training data set must be memorized. One effective way to overcome this issue is the distributed learning framework. Through majority voting, the distributed nearest neighbor classifier achieves the same rate of convergence as its oracle version in terms of the regret, up to a multiplicative constant that depends solely on the data dimension. This multiplicative difference can be eliminated by replacing majority voting with a weighted voting scheme. In addition, we provide sharp theoretical upper bounds on the number of subsamples required for the distributed nearest neighbor classifier to attain the optimal convergence rate. Interestingly, the weighted voting scheme allows a larger number of subsamples than the majority voting one. Our findings are supported by numerical studies.
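The two voting schemes described above can be sketched as follows. This is a minimal illustration assuming binary labels in {0, 1}; the function names (`local_posterior`, `distributed_nn_predict`) are ours, and the "weighted" variant simply averages the local posterior estimates (soft voting), a stand-in since the abstract does not spell out the paper's exact weighting scheme.

```python
import numpy as np

def local_posterior(train_X, train_y, x, k):
    """k-NN estimate of P(Y = 1 | x) on one subsample."""
    dist = np.linalg.norm(train_X - x, axis=1)
    nearest = np.argsort(dist)[:k]
    return train_y[nearest].mean()

def distributed_nn_predict(subsamples, x, k, weighted=False):
    """Aggregate k-NN predictions over s subsamples.

    Majority voting: each subsample casts a hard 0/1 vote.
    "Weighted" here averages the local posterior estimates (soft
    voting) -- an illustrative stand-in for the paper's weighted
    scheme, which the abstract does not specify.
    """
    if weighted:
        score = np.mean([local_posterior(X, y, x, k)
                         for X, y in subsamples])
    else:
        score = np.mean([local_posterior(X, y, x, k) > 0.5
                         for X, y in subsamples])
    return int(score > 0.5)
```

Each machine only touches its own subsample, so no single node ever needs the full training set in memory; only the s local votes (or posterior estimates) are communicated.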


Rates of Convergence for Large-scale Nearest Neighbor Classification

Neural Information Processing Systems

Nearest neighbor is a popular class of classification methods with many desirable properties. For a large data set that cannot be loaded into the memory of a single machine due to computation, communication, privacy, or ownership limitations, we consider the divide-and-conquer scheme: the entire data set is divided into small subsamples, nearest neighbor predictions are made on each, and a final decision is reached by aggregating the subsample predictions via majority voting. We name this method the big Nearest Neighbor (bigNN) classifier, and provide its rates of convergence under minimal assumptions, in terms of both the excess risk and the classification instability; these are proven to match the rates of the oracle nearest neighbor classifier and cannot be improved. To significantly reduce the prediction time required to achieve the optimal rate, we also consider a pre-training acceleration technique applied to the bigNN method, with a proven convergence rate. We find that in the distributed setting, the optimal choice of the neighborhood size k should scale with both the total sample size and the number of partitions, and that there is a theoretical upper limit for the latter. Numerical studies verify the theoretical findings.
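The divide-and-conquer scheme can be sketched end to end as below. The local choice k ≈ m^{4/(d+4)} on a subsample of size m is only an illustrative scaling echoing the abstract's point that k must depend on both the total sample size and the number of partitions s; the paper's exact rate and constants may differ, and `bignn_predict` is a hypothetical name, not the authors' released code.

```python
import numpy as np

def bignn_predict(X, y, x, s, d, rng=None):
    """Divide-and-conquer k-NN (bigNN-style sketch, binary labels).

    Randomly split the n training points into s subsamples, run k-NN
    on each, and aggregate the s hard votes by majority. The local
    k = round(m ** (4 / (d + 4))), with m the subsample size, is an
    illustrative choice only; see the paper for the precise scaling.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    perm = rng.permutation(len(X))
    votes = []
    for chunk in np.array_split(perm, s):
        k = max(1, round(len(chunk) ** (4 / (d + 4))))
        dist = np.linalg.norm(X[chunk] - x, axis=1)
        nearest = np.argsort(dist)[:k]
        votes.append(y[chunk][nearest].mean() > 0.5)
    return int(np.mean(votes) > 0.5)
```

Because each subsample is processed independently, the s local predictions can run in parallel on separate machines, and only one bit per machine is aggregated at the end.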




e347c51419ffb23ca3fd5050202f9c3d-AuthorFeedback.pdf

Neural Information Processing Systems

" The algorithm is rather similar to the denoising algorithm referred to in the paper . " The second part of our paper (denoised bigNN) may be viewed as an improvement of the denoising algorithm Indeed, the two algorithms share some similarity. " In the extreme cases... it seems there may be a possibility that " The idea of distributing the data and thus speed up the process of classification seems somewhat inferior to compressing " Given the aim of the paper, I think it's extremely important to experimentally compare against recently " KWS17 is theory oriented and the authors did not provide code. Note our R-based code (to be released publicly) has room for improvement. " Perhaps more discussion/theory on when/why one should use pasting rather than bagging to ensemble k-NN estimators " Separately, depending on the dataset (i.e., the feature space and distribution), I would suspect that even just taking 1 Our proof would work even for only one subsample.