On the Pinsker bound of inner product kernel regression in large dimensions

Weihao Lu, Jialin Ding, Haobo Zhang, Qian Lin

arXiv.org Machine Learning 

This intriguing phenomenon, in which the two asymptotic risks coincide, was rigorously justified by the seminal work on Le Cam equivalence, which established the asymptotic equivalence between Gaussian sequence models, the white noise model, and certain nonparametric regression models (see, e.g., [3, 4, 5]). Subsequent studies have established similar exact risks for a variety of nonparametric estimation problems, including density estimation, regression with non-Gaussian noise or random designs, Besov bodies, and wavelet estimation (e.g., [6, 7, 8, 2, 9, 10, 11, 12, 13]); for a detailed review of these developments, see [14] and the references therein. Constants akin to β(m, R), now commonly referred to as Pinsker constants, play an indispensable role in studying the super-efficiency phenomenon observed in nonparametric problems, which has been the subject of extensive investigation (e.g., [15, 16, 17, 18]). More recently, the strong theoretical links between the training dynamics of wide neural networks and regression with the corresponding neural tangent kernel have motivated substantial research into the performance of spectral algorithms, such as kernel ridge regression and kernel gradient descent, on kernel regression problems (see, e.g., [19, 20, 21, 22, 23, 24]).
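Since inner product kernel regression is the setting of this paper, a minimal sketch of kernel ridge regression with an inner-product kernel K(x, x') = Φ(⟨x, x'⟩/d) may help fix ideas. The particular map Φ, the target function, and the regularization level λ below are illustrative assumptions for the sketch, not choices made in the paper.

```python
import numpy as np

# Kernel ridge regression with an inner-product kernel K(x, x') = Phi(<x, x'> / d).
# Phi, the target f_star, and lam are illustrative placeholders.

def inner_product_kernel(X1, X2, phi):
    d = X1.shape[1]
    return phi(X1 @ X2.T / d)

def krr_fit(X, y, phi, lam):
    """Solve (K + n * lam * I) alpha = y for the representer coefficients."""
    n = X.shape[0]
    K = inner_product_kernel(X, X, phi)
    return np.linalg.solve(K + n * lam * np.eye(n), y)

def krr_predict(X_train, alpha, X_test, phi):
    """Evaluate f_hat(x) = sum_i alpha_i * K(x, x_i) on test points."""
    return inner_product_kernel(X_test, X_train, phi) @ alpha

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 200, 50                            # sample size and (large) dimension
    X = rng.standard_normal((n, d))
    f_star = lambda X: np.tanh(X[:, 0])       # illustrative target function
    y = f_star(X) + 0.1 * rng.standard_normal(n)
    phi = lambda t: (1.0 + t) ** 2            # illustrative inner-product kernel
    alpha = krr_fit(X, y, phi, lam=1e-2)
    X_test = rng.standard_normal((500, d))
    mse = np.mean((krr_predict(X, alpha, X_test, phi) - f_star(X_test)) ** 2)
    print(f"test MSE: {mse:.4f}")
```

The closed-form solve above is the standard representer-theorem form of kernel ridge regression; kernel gradient descent would instead iterate on the same kernel matrix, which is why both fall under the umbrella of spectral algorithms mentioned in the text.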
