random feature regression
Asymptotics of Bayesian Uncertainty Estimation in Random Features Regression
In this paper we compare and contrast the behavior of the posterior predictive distribution with the risk of the maximum a posteriori (MAP) estimator for the random features regression model in the overparameterized regime. We focus on the variance of the posterior predictive distribution (Bayesian model average) and compare its asymptotics to those of the risk of the MAP estimator. In the regime where the model dimensions grow faster than any constant multiple of the number of samples, asymptotic agreement between these two quantities is governed by the phase transition in the signal-to-noise ratio. They also asymptotically agree with each other when the number of samples grows faster than any constant multiple of the model dimensions. Numerical simulations illustrate finer distributional properties of the two quantities for finite dimensions. We conjecture that they have Gaussian fluctuations and exhibit properties similar to those found by previous authors in a Gaussian sequence model, which is of independent theoretical interest.
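To make the two quantities being compared concrete, here is a minimal numerical sketch, assuming a Gaussian prior on the output weights, a ReLU random feature map, and a linear teacher; the dimensions and noise level are illustrative choices, not the paper's exact asymptotic regime.

```python
# A minimal sketch (not the paper's exact setup): Bayesian random features regression
# with a Gaussian prior on the output weights. We compare the posterior predictive
# variance at fresh test points with the test risk of the MAP (ridge) estimator.
import numpy as np

rng = np.random.default_rng(0)
d, p, n, n_test = 50, 400, 200, 1000   # input dim, features, samples (p >> n: overparameterized)
sigma2, tau2 = 0.25, 1.0               # noise variance, prior variance of output weights

W = rng.normal(size=(p, d)) / np.sqrt(d)                   # fixed random first-layer weights
phi = lambda X: np.maximum(W @ X.T, 0.0).T / np.sqrt(p)    # ReLU random feature map

beta_star = rng.normal(size=d)                             # teacher direction (assumed linear target)
X, X_te = rng.normal(size=(n, d)), rng.normal(size=(n_test, d))
y = X @ beta_star + np.sqrt(sigma2) * rng.normal(size=n)

Phi, Phi_te = phi(X), phi(X_te)
lam = sigma2 / tau2                                        # ridge penalty implied by the prior
A = Phi.T @ Phi + lam * np.eye(p)
theta_map = np.linalg.solve(A, Phi.T @ y)                  # MAP / ridge estimator

# Posterior predictive variance (Bayesian model average) at the test points.
Sigma_post = sigma2 * np.linalg.inv(A)
pred_var = sigma2 + np.einsum("ij,jk,ik->i", Phi_te, Sigma_post, Phi_te)

# Test risk of the MAP estimator against the noiseless target.
risk_map = np.mean((Phi_te @ theta_map - X_te @ beta_star) ** 2)
print(f"mean predictive variance: {pred_var.mean():.3f}   MAP test risk: {risk_map:.3f}")
```

Under this prior the MAP estimator coincides with ridge regression with penalty sigma2/tau2, which is why the same matrix A appears in both the point estimate and the posterior covariance.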
1fd09c5f59a8ff35d499c0ee25a1d47e-AuthorFeedback.pdf
We wish to thank the reviewers for their valuable feedback. Despite its simplicity, the setup we consider is rich enough to capture the two kinds of overfitting as desired. An explanation for this counter-intuitive behavior is provided in appendix B. We will clarify this point. A more thorough investigation of the impact of the structure of the dataset is left for future work.
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Asia > Middle East > Jordan (0.04)
- (6 more...)
Dimension-free deterministic equivalents and scaling laws for random feature regression
In this work we investigate the generalization performance of random feature ridge regression (RFRR). Our main contribution is a general deterministic equivalent for the test error of RFRR. Specifically, under a certain concentration property, we show that the test error is well approximated by a closed-form expression that depends only on the feature map eigenvalues. Notably, our approximation guarantee is non-asymptotic, multiplicative, and independent of the feature map dimension, allowing for infinite-dimensional features. We expect this deterministic equivalent to hold broadly beyond our theoretical analysis, and we empirically validate its predictions on various real and synthetic datasets. As an application, we derive sharp excess error rates under standard power-law assumptions on the spectrum and target decay.
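As a hedged illustration of what such a deterministic equivalent is meant to predict, the sketch below estimates the RFRR test error by Monte Carlo over independent draws of the data and the random features. The Gaussian data model, ReLU features, and all sizes are assumptions made here for illustration; the paper's closed-form expression is not reproduced.

```python
# A minimal sketch: Monte Carlo estimate of the RFRR test error that a deterministic
# equivalent of the kind described above would be compared against.
import numpy as np

def rfrr_test_error(n, p, d=30, lam=1e-3, noise=0.1, n_test=2000, n_rep=20, seed=0):
    rng = np.random.default_rng(seed)
    errs = []
    for _ in range(n_rep):
        beta = rng.normal(size=d) / np.sqrt(d)            # random linear target (assumption)
        W = rng.normal(size=(p, d)) / np.sqrt(d)          # random first-layer weights
        feats = lambda X: np.maximum(X @ W.T, 0.0) / np.sqrt(p)
        X = rng.normal(size=(n, d))
        y = X @ beta + noise * rng.normal(size=n)
        Phi = feats(X)
        theta = np.linalg.solve(Phi.T @ Phi + lam * np.eye(p), Phi.T @ y)
        X_te = rng.normal(size=(n_test, d))
        errs.append(np.mean((feats(X_te) @ theta - X_te @ beta) ** 2))
    return float(np.mean(errs))

for p in (50, 200, 800):                                  # sweep the number of random features
    print(p, rfrr_test_error(n=100, p=p))
```

Sweeping p at fixed n traces out the empirical curve against which a closed-form deterministic equivalent would be checked.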
Review for NeurIPS paper: Understanding Double Descent Requires A Fine-Grained Bias-Variance Decomposition
Additional Feedback: This paper analyzes the "double descent" phenomenon, in which the generalization error of a model peaks at the interpolation threshold (as a function either of model complexity or of sample size). The authors develop a fine-grained bias-variance decomposition which splits the risk into the bias and several distinct variance terms. They apply this decomposition to the random features regression model and show which of these terms lead to divergence. This paper addresses an important issue that has lately been the focus of much research. It suggests a "fine-grained" bias-variance decomposition that helps clarify several subtle effects.
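A minimal sketch of the kind of decomposition being discussed, assuming a simple random features setup: conditioning on the training data splits the prediction variance into a component attributable to the data and noise draw and a component attributable to the random feature initialization (law of total variance). The paper's decomposition is finer than this two-way split; the model and sizes below are illustrative only.

```python
# A minimal sketch of a (coarse) bias-variance decomposition for random features
# regression at a single test point, via nested Monte Carlo over data draws and
# feature draws. The paper's decomposition distinguishes more variance sources.
import numpy as np

rng = np.random.default_rng(0)
d, p, n, lam, noise = 20, 60, 40, 1e-3, 0.2
beta = rng.normal(size=d) / np.sqrt(d)
x_test = rng.normal(size=d)                      # a single fixed test point

def predict(X, y, W):
    feats = lambda Z: np.maximum(Z @ W.T, 0.0) / np.sqrt(p)
    Phi = feats(X)
    theta = np.linalg.solve(Phi.T @ Phi + lam * np.eye(p), Phi.T @ y)
    return (feats(x_test[None, :]) @ theta).item()

preds = np.empty((30, 30))                       # (data draws) x (feature draws)
for i in range(preds.shape[0]):
    X = rng.normal(size=(n, d))
    y = X @ beta + noise * rng.normal(size=n)
    for j in range(preds.shape[1]):
        W = rng.normal(size=(p, d)) / np.sqrt(d)
        preds[i, j] = predict(X, y, W)

var_total = preds.var()
var_features = preds.var(axis=1).mean()          # E_data[ Var_features(pred | data) ]
var_data = preds.mean(axis=1).var()              # Var_data( E_features[pred | data] )
bias2 = (preds.mean() - x_test @ beta) ** 2      # bias against the noiseless target
print(f"bias^2={bias2:.3f}  var(data)={var_data:.3f}  "
      f"var(features)={var_features:.3f}  total var={var_total:.3f}")
```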
Deep Regression Ensembles
Didisheim, Antoine, Kelly, Bryan, Malamud, Semyon
We introduce a methodology for designing and training deep neural networks (DNNs) that we call "Deep Regression Ensembles" (DRE). It bridges the gap between DNNs and two-layer neural networks trained with random feature regression. Each layer of DRE has two components: randomly drawn input weights and output weights trained myopically (as if it were the final output layer) using linear ridge regression. Within a layer, each neuron uses a different subset of inputs and a different ridge penalty, constituting an ensemble of random feature ridge regressions. Our experiments show that a single DRE architecture is on par with or exceeds state-of-the-art DNNs on many data sets. Yet, because DRE neural weights are either known in closed form or randomly drawn, its computational cost is orders of magnitude smaller than that of a DNN.
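The following is a minimal sketch of a DRE-style layer as described above, not the authors' implementation: each neuron draws a random subset of inputs and random weights, and only its output weights are fitted, myopically, by ridge regression on the target. The activation, subset size, and penalty range are assumptions made for illustration.

```python
# A minimal sketch of a DRE-style layer: per-neuron random input subset, random
# weights, and a per-neuron ridge penalty; only the output weights are trained.
import numpy as np

def dre_layer(X, y, n_neurons=32, n_features=64, seed=0):
    rng = np.random.default_rng(seed)
    outputs, params = [], []
    for _ in range(n_neurons):
        idx = rng.choice(X.shape[1], size=min(8, X.shape[1]), replace=False)  # input subset
        W = rng.normal(size=(n_features, idx.size))                           # random input weights
        lam = 10.0 ** rng.uniform(-4, 2)                                      # per-neuron ridge penalty
        H = np.tanh(X[:, idx] @ W.T)                                          # random feature expansion
        w_out = np.linalg.solve(H.T @ H + lam * np.eye(n_features), H.T @ y)  # myopic ridge fit
        outputs.append(H @ w_out)
        params.append((idx, W, lam, w_out))
    return np.column_stack(outputs), params                                   # one column per neuron

# Example: stack two layers on synthetic data; the second layer sees the first layer's outputs.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 20))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)
H1, _ = dre_layer(X, y, seed=1)
H2, _ = dre_layer(H1, y, seed=2)
print(H1.shape, H2.shape)
```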
- North America > United States (0.28)
- Europe > Switzerland > Zürich > Zürich (0.04)
- Europe > Switzerland > Vaud > Lausanne (0.04)
Covariate Shift in High-Dimensional Random Feature Regression
Tripuraneni, Nilesh, Adlam, Ben, Pennington, Jeffrey
A significant obstacle in the development of robust machine learning models is covariate shift, a form of distribution shift that occurs when the input distributions of the training and test sets differ while the conditional label distributions remain the same. Despite the prevalence of covariate shift in real-world applications, a theoretical understanding in the context of modern machine learning has remained lacking. In this work, we examine the exact high-dimensional asymptotics of random feature regression under covariate shift and present a precise characterization of the limiting test error, bias, and variance in this setting. Our results motivate a natural partial order over covariate shifts that provides a sufficient condition for determining when the shift will harm (or even help) test performance. We find that overparameterized models exhibit enhanced robustness to covariate shift, providing one of the first theoretical explanations for this intriguing phenomenon. Additionally, our analysis reveals an exact linear relationship between in-distribution and out-of-distribution generalization performance, offering an explanation for this surprising recent empirical observation.
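A minimal simulation sketch of this setup, assuming Gaussian inputs, a ReLU random feature map, and a diagonal covariance shift: the conditional label model is shared between train and test, and only the input covariance changes. All choices below are illustrative, not the paper's exact assumptions or its asymptotic characterization.

```python
# A minimal sketch of covariate shift in random feature regression: train on one input
# covariance, evaluate on another, with the same conditional label model throughout.
import numpy as np

rng = np.random.default_rng(0)
d, p, n, n_test, lam, noise = 30, 300, 150, 5000, 1e-2, 0.1
beta = rng.normal(size=d) / np.sqrt(d)
W = rng.normal(size=(p, d)) / np.sqrt(d)
feats = lambda X: np.maximum(X @ W.T, 0.0) / np.sqrt(p)

cov_train = np.eye(d)
cov_test = np.diag(np.linspace(0.2, 3.0, d))     # shifted input covariance (covariate shift)

X = rng.multivariate_normal(np.zeros(d), cov_train, size=n)
y = X @ beta + noise * rng.normal(size=n)        # same conditional label model everywhere
Phi = feats(X)
theta = np.linalg.solve(Phi.T @ Phi + lam * np.eye(p), Phi.T @ y)

def test_error(cov):
    X_te = rng.multivariate_normal(np.zeros(d), cov, size=n_test)
    return np.mean((feats(X_te) @ theta - X_te @ beta) ** 2)

print(f"in-distribution error: {test_error(cov_train):.4f}   "
      f"out-of-distribution error: {test_error(cov_test):.4f}")
```

Varying p relative to n in this script is one simple way to probe the overparameterization effect mentioned above.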
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Asia > Middle East > Jordan (0.04)
- (7 more...)
Deformed semicircle law and concentration of nonlinear random matrices for ultra-wide neural networks
In this paper, we study the two-layer fully connected neural network given by $f(X)=\frac{1}{\sqrt{d_1}}\boldsymbol{a}^\top\sigma\left(WX\right)$, where $X\in\mathbb{R}^{d_0\times n}$ is a deterministic data matrix, $W\in\mathbb{R}^{d_1\times d_0}$ and $\boldsymbol{a}\in\mathbb{R}^{d_1}$ are random Gaussian weights, and $\sigma$ is a nonlinear activation function. We obtain the limiting spectral distributions of two kernel matrices related to $f(X)$: the empirical conjugate kernel (CK) and the neural tangent kernel (NTK), beyond the linear-width regime ($d_1\asymp n$). In the ultra-wide regime $d_1/n\to\infty$, with proper assumptions on $X$ and $\sigma$, a deformed semicircle law appears. Such a limiting law is first proved for general centered sample covariance matrices with correlation and then specialized to our neural network model. We also prove non-asymptotic concentration of the empirical CK and NTK around their limiting kernels in spectral norm, together with lower bounds on their smallest eigenvalues. As an application, we verify that random feature regression achieves the same asymptotic performance as its limiting kernel regression in the ultra-wide limit; the limiting training and test errors for random feature regression are calculated via the corresponding kernel regression. We also provide a nonlinear Hanson-Wright inequality suitable for neural networks with random weights and Lipschitz activation functions.
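As a small numerical companion to the last claim, here is a sketch, assuming ReLU activation so that the limiting conjugate kernel has the closed arc-cosine form: it compares kernel ridge regression with the empirical CK $\sigma(WX)^\top\sigma(WX)/d_1$ (which coincides algebraically with random feature ridge regression) against kernel ridge regression with the limiting kernel as the width $d_1$ grows. The data model and sizes are illustrative assumptions.

```python
# A minimal sketch: random feature ridge regression (via the empirical conjugate kernel)
# versus kernel ridge regression with the limiting arc-cosine kernel, for growing width d1.
import numpy as np

rng = np.random.default_rng(0)
d0, n, n_test, lam, noise = 20, 100, 200, 1e-2, 0.1
X, X_te = rng.normal(size=(d0, n)), rng.normal(size=(d0, n_test))   # columns are inputs, as in f(X)
beta = rng.normal(size=d0) / np.sqrt(d0)
y = X.T @ beta + noise * rng.normal(size=n)

def arccos_kernel(A, B):
    """Limiting CK for ReLU features with W rows ~ N(0, I/d0): E[relu(w.a) relu(w.b)]."""
    na, nb = np.linalg.norm(A, axis=0), np.linalg.norm(B, axis=0)
    cos = np.clip((A.T @ B) / np.outer(na, nb), -1.0, 1.0)
    th = np.arccos(cos)
    return np.outer(na, nb) * (np.sin(th) + (np.pi - th) * np.cos(th)) / (2 * np.pi * d0)

pred_lim = arccos_kernel(X_te, X) @ np.linalg.solve(arccos_kernel(X, X) + lam * np.eye(n), y)

for d1 in (100, 1000, 10000):                                        # increasing width
    W = rng.normal(size=(d1, d0)) / np.sqrt(d0)
    S, S_te = np.maximum(W @ X, 0.0), np.maximum(W @ X_te, 0.0)      # sigma(WX), sigma(WX_test)
    K, K_te = S.T @ S / d1, S_te.T @ S / d1                          # empirical conjugate kernel
    pred_rf = K_te @ np.linalg.solve(K + lam * np.eye(n), y)         # = random feature ridge prediction
    print(d1, np.max(np.abs(pred_rf - pred_lim)))
```

The printed gap between the two predictors should shrink as $d_1$ grows, illustrating the ultra-wide equivalence stated above.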
- North America > Canada > Ontario > Toronto (0.14)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- North America > United States > California > San Diego County > San Diego (0.04)
- (3 more...)