Understanding Cross-Validation, Part 2 (Machine Learning)

#artificialintelligence 

Abstract: We derive high-dimensional Gaussian comparison results for the standard V-fold cross-validated risk estimates. Our result combines a recent stability-based argument for the low-dimensional central limit theorem of cross-validation with the high-dimensional Gaussian comparison framework for sums of independent random variables. These results give new insights into the joint sampling distribution of cross-validated risks in the context of model comparison and tuning parameter selection, where the number of candidate models and tuning parameters can be larger than the fitting sample size.

Abstract: In this article we prove that estimator stability is enough to show that leave-one-out cross-validation is a sound procedure, by providing concentration bounds in a general framework. In particular, we provide concentration bounds beyond Lipschitz continuity assumptions on the loss or on the estimator.
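To make the setting of the first abstract concrete, below is a minimal sketch of V-fold cross-validated risks computed jointly over a grid of candidate models. The choice of ridge regression, the penalty grid `lambdas`, and the helper `vfold_cv_risks` are illustrative assumptions for this post, not constructions from the paper; the paper's contribution is the Gaussian comparison theory for the resulting vector of risks, which the code only produces, not analyzes.

```python
import numpy as np

def vfold_cv_risks(X, y, lambdas, V=5, seed=0):
    """V-fold cross-validated squared-error risks for a grid of ridge penalties.

    Returns one risk estimate per candidate lambda, so the whole vector of
    cross-validated risks can be compared jointly; in the high-dimensional
    regime of the abstract, len(lambdas) may exceed the per-fold sample size.
    """
    n, p = X.shape
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n), V)
    risks = np.zeros(len(lambdas))
    for k, lam in enumerate(lambdas):
        fold_losses = []
        for test_idx in folds:
            train_idx = np.setdiff1d(np.arange(n), test_idx)
            Xtr, ytr = X[train_idx], y[train_idx]
            Xte, yte = X[test_idx], y[test_idx]
            # Ridge fit on the training folds (closed form).
            beta = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(p), Xtr.T @ ytr)
            fold_losses.append(np.mean((yte - Xte @ beta) ** 2))
        risks[k] = np.mean(fold_losses)
    return risks

# Toy usage: tuning parameter selection by the smallest cross-validated risk.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))
y = X @ rng.standard_normal(10) + rng.standard_normal(200)
lambdas = np.geomspace(1e-3, 1e2, 30)
risks = vfold_cv_risks(X, y, lambdas)
print("selected lambda:", lambdas[np.argmin(risks)])
```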
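For the second abstract, here is a minimal sketch of the leave-one-out estimate computed for a stable estimator (again ridge regression, chosen purely for illustration). The function `loo_cv_risk` and the informal comparison against a large independent test set are assumptions of this demo, not the paper's procedure; the paper's concentration bounds are what justify expecting the two numbers to be close.

```python
import numpy as np

def loo_cv_risk(X, y, lam=1.0):
    """Naive leave-one-out cross-validated squared-error risk for ridge."""
    n, p = X.shape
    losses = np.empty(n)
    for i in range(n):
        idx = np.delete(np.arange(n), i)
        Xtr, ytr = X[idx], y[idx]
        # Refit with observation i held out, then score it.
        beta = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(p), Xtr.T @ ytr)
        losses[i] = (y[i] - X[i] @ beta) ** 2
    return losses.mean()

# Informal check: for a stable estimator such as ridge, the leave-one-out
# estimate should land close to the risk measured on a large test set.
rng = np.random.default_rng(2)
beta_true = rng.standard_normal(5)
X, Xtest = rng.standard_normal((100, 5)), rng.standard_normal((5000, 5))
y = X @ beta_true + rng.standard_normal(100)
ytest = Xtest @ beta_true + rng.standard_normal(5000)
beta_full = np.linalg.solve(X.T @ X + np.eye(5), X.T @ y)
print("LOO estimate :", loo_cv_risk(X, y, lam=1.0))
print("test-set risk:", np.mean((ytest - Xtest @ beta_full) ** 2))
```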