First order expansion of convex regularized estimators
Neural Information Processing Systems
We consider first order expansions of convex penalized estimators in high-dimensional regression problems with random designs. Our setting includes linear regression and logistic regression as special cases. For a given penalty function h and the corresponding penalized estimator \hat{\beta}, we construct a quantity \eta, the first order expansion of \hat{\beta}, such that the distance between \hat{\beta} and \eta is an order of magnitude smaller than the estimation error \|\hat{\beta} - \beta^*\|. In this sense, the first order expansion \eta can be thought of as a generalization of influence functions from the mathematical statistics literature to regularized estimators in high dimensions. Such a first order expansion implies that the risk of \hat{\beta} is asymptotically the same as the risk of \eta, which leads to a precise characterization of the MSE of \hat{\beta}; this characterization takes a particularly simple form for isotropic designs.
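To illustrate the kind of statement the abstract makes, the following sketch uses the classical, unpenalized case (h = 0, squared loss), where the influence-function expansion is standard: for OLS with isotropic design, \eta = \beta^* + \Sigma^{-1} n^{-1} \sum_i x_i (y_i - x_i^\top \beta^*). This is a hedged illustration of the classical analogue, not the paper's construction for general penalties; the variable names and the Gaussian data-generating process are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 5000, 10
beta_star = np.ones(p)
X = rng.standard_normal((n, p))            # isotropic design, Sigma = I
y = X @ beta_star + rng.standard_normal(n)

# OLS estimator (penalty h = 0, squared loss)
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]

# Classical influence-function expansion around beta_star (Sigma = I here):
# eta = beta_star + (1/n) * sum_i x_i * (y_i - x_i' beta_star)
eta = beta_star + X.T @ (y - X @ beta_star) / n

err = np.linalg.norm(beta_hat - beta_star)  # estimation error, order sqrt(p/n)
gap = np.linalg.norm(beta_hat - eta)        # higher-order remainder
print(gap / err)                            # an order of magnitude smaller than 1
```

The ratio gap/err is small because the remainder involves the deviation of the sample covariance X'X/n from Sigma, which is itself of order sqrt(p/n); the paper's contribution is to establish analogous expansions when a nontrivial convex penalty h is present.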