12151_differentially_private_general.pdf

Neural Information Processing Systems

A.3 Low Dimension. Before presenting the proof of Theorem 1, we provide formal statements of its corollaries. We then bound average argument stability in terms of average regret (Lemma 5). Substituting these in the above equation gives the claimed bound. We now fill in the details. Thus, substituting the above in Eqn. (3) and the bound from (6), we have $\mathbb{E}\big[L(\widehat{w}; \mathcal{D}) - L(w^{*}; \mathcal{D})\big] \le \dots$ Substituting the value of $G$ completes the proof.
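For context, the following is a hedged sketch of the standard decomposition that such stability-plus-regret arguments typically use; the symbols $\widehat{L}$ (empirical risk), $S$ (the sample), $S^{(i)}$ (a neighboring sample), and $G$ (Lipschitz constant of the loss) are assumptions about the excerpt's notation, not statements quoted from the paper.

\begin{align*}
\mathbb{E}\big[L(\widehat{w};\mathcal{D}) - L(w^{*};\mathcal{D})\big]
  &= \mathbb{E}\big[L(\widehat{w};\mathcal{D}) - \widehat{L}(\widehat{w};S)\big]
   + \mathbb{E}\big[\widehat{L}(\widehat{w};S) - \widehat{L}(w^{*};S)\big],
   \quad\text{since } \mathbb{E}\big[\widehat{L}(w^{*};S)\big] = L(w^{*};\mathcal{D}),\\
\mathbb{E}\big[L(\widehat{w};\mathcal{D}) - \widehat{L}(\widehat{w};S)\big]
  &\le \frac{G}{n}\sum_{i=1}^{n}\mathbb{E}\big\|\widehat{w}(S) - \widehat{w}(S^{(i)})\big\|
  \qquad\text{(Lipschitzness plus average argument stability).}
\end{align*}

The second term is the empirical excess risk, which is where an average regret bound (as in Lemma 5 of the excerpt) would enter; substituting the Lipschitz constant $G$ then yields the final bound.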


Minimum complexity interpolation in random features models

Celentano, Michael, Misiakiewicz, Theodor, Montanari, Andrea

arXiv.org Machine Learning

Despite their many appealing properties, kernel methods are heavily affected by the curse of dimensionality. For instance, in the case of inner product kernels in $\mathbb{R}^d$, the Reproducing Kernel Hilbert Space (RKHS) norm is often very large for functions that depend strongly on a small subset of directions (ridge functions). Correspondingly, such functions are difficult to learn using kernel methods. This observation has motivated the study of generalizations of kernel methods, whereby the RKHS norm -- which is equivalent to a weighted $\ell_2$ norm -- is replaced by a weighted functional $\ell_p$ norm, which we refer to as the $\mathcal{F}_p$ norm. Unfortunately, the tractability of these approaches is unclear. The kernel trick is not available, and minimizing these norms requires solving an infinite-dimensional convex problem. We study random features approximations to these norms and show that, for $p>1$, the number of random features required to approximate the original learning problem is upper bounded by a polynomial in the sample size. Hence, learning with $\mathcal{F}_p$ norms is tractable in these cases. We introduce a proof technique based on uniform concentration in the dual, which can be of broader interest in the study of overparametrized models.
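The sketch below (Python/NumPy) illustrates the kind of random features surrogate the abstract describes: the kernel feature map is replaced by $N$ sampled features, and an $\ell_p$ penalty on the feature coefficients stands in for the $\mathcal{F}_p$ norm. The ReLU feature map, the penalized least-squares objective, the plain gradient-descent solver, and all constants are illustrative assumptions, not the authors' construction.

# Hedged sketch, not the authors' code: finite random-features surrogate for
# an F_p-style interpolation problem with p > 1. Feature map, solver, and
# constants are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 20          # sample size and input dimension
N = 2000                # number of random features
p = 1.5                 # exponent of the (assumed) l_p penalty, p > 1

# Inputs roughly on the unit sphere; the target is a ridge function,
# i.e. it depends on a single direction, as in the abstract's example.
X = rng.standard_normal((n, d)) / np.sqrt(d)
y = np.maximum(X[:, 0], 0.0)

# Random features for an inner-product kernel: phi_j(x) = relu(<w_j, x>).
W = rng.standard_normal((N, d))
Phi = np.maximum(X @ W.T, 0.0) / np.sqrt(N)   # (n, N) feature matrix

# Surrogate problem: least-squares fit plus an l_p penalty on the
# coefficients, solved by plain (sub)gradient descent.
lam, lr = 1e-4, 0.5
a = np.zeros(N)
for _ in range(3000):
    resid = Phi @ a - y
    grad = Phi.T @ resid / n + lam * p * np.sign(a) * np.abs(a) ** (p - 1)
    a -= lr * grad

print("train MSE:", float(np.mean((Phi @ a - y) ** 2)))
print("l_p norm of coefficients:", float(np.sum(np.abs(a) ** p) ** (1.0 / p)))

The only point being illustrated is the finite-dimensional surrogate itself; the abstract's result is that, for $p>1$, a number of random features polynomial in the sample size suffices for such a surrogate to approximate the original infinite-dimensional $\mathcal{F}_p$ problem.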