A Theoretical Analysis of the Test Error of Finite-Rank Kernel Ridge Regression

Tin Sum Cheng, Aurélien Lucchi, Ivan Dokmanić, Anastasis Kratsios, David Belius

arXiv.org Machine Learning 

Generalization is a central theme in statistical learning theory. The recent renewed interest in kernel methods, especially in Kernel Ridge Regression (KRR), is largely due to the fact that deep neural network (DNN) training can be approximated by kernel methods under appropriate conditions (Jacot et al. [2018], Arora et al. [2019], Bordelon et al. [2020]), a regime in which the test error is analytically more tractable and thus enjoys stronger theoretical guarantees. However, many prior results have been derived under conditions incompatible with practical settings. For instance, Liang and Rakhlin [2020], Liu et al. [2021a], Mei et al. [2021], and Misiakiewicz [2022] give asymptotic bounds on the KRR test error that require the input dimension d to tend to infinity. In reality, the input dimension of the data set, and hence of the target function, is typically finite.
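To make the object of study concrete, the following is a minimal sketch of KRR with a finite-rank kernel, i.e., a kernel k(x, x') = phi(x)^T phi(x') induced by a finite-dimensional feature map phi, so that the kernel matrix has bounded rank regardless of the sample size. The polynomial feature map, the synthetic target function, and all parameter values below are illustrative assumptions for this sketch, not the paper's actual setup.

import numpy as np

rng = np.random.default_rng(0)

def feature_map(x):
    # Finite-rank feature map phi: R -> R^3 (degree-2 polynomial features),
    # so k(x, x') = phi(x) @ phi(x') has rank at most 3.
    return np.stack([np.ones_like(x), x, x**2], axis=-1)

def kernel(X, Z):
    return feature_map(X) @ feature_map(Z).T

# Synthetic 1-d regression task (finite input dimension d = 1).
n_train, n_test, noise, lam = 50, 200, 0.1, 1e-2
X_train = rng.uniform(-1, 1, n_train)
X_test = rng.uniform(-1, 1, n_test)
f = lambda x: x**2 - 0.5 * x          # target function (assumed for this sketch)
y_train = f(X_train) + noise * rng.standard_normal(n_train)

# Closed-form KRR predictor: f_hat(x) = k(x, X) (K + n*lam*I)^{-1} y.
K = kernel(X_train, X_train)
alpha = np.linalg.solve(K + n_train * lam * np.eye(n_train), y_train)
y_pred = kernel(X_test, X_train) @ alpha

test_error = np.mean((y_pred - f(X_test))**2)
print(f"test error (MSE against noiseless target): {test_error:.4f}")

The ridge term here uses the n*lam scaling; conventions differ across papers (lam vs. n*lam), so the scaling should be matched to whichever definition of the test error is being analyzed. The printed quantity is the mean squared error of the KRR predictor against the noiseless target, the kind of test error whose behavior the paper characterizes.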
