Feature Adaptation for Sparse Linear Regression
Neural Information Processing Systems
Sparse linear regression is a central problem in high-dimensional statistics. We study the correlated random design setting, where the covariates are drawn from a multivariate Gaussian N(0, \Sigma), and we seek an estimator with small excess risk. If the true signal is t-sparse, it is information-theoretically possible to achieve strong recovery guarantees with only O(t \log n) samples. However, computationally efficient algorithms have sample complexity linear in (some variant of) the *condition number* of \Sigma. Classical algorithms such as the Lasso can require significantly more samples than necessary, even when there is only a single sparse approximate dependency among the covariates. We provide a polynomial-time algorithm that, given \Sigma, automatically adapts the Lasso to tolerate a small number of approximate dependencies.
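The setting described above can be illustrated with a small simulation. The sketch below (all parameters are hypothetical choices for illustration, not from the paper) draws a correlated Gaussian design containing one nearly collinear pair of covariates, generates responses from a t-sparse signal, and fits a standard Lasso via scikit-learn:

```python
import numpy as np
from sklearn.linear_model import Lasso

# Illustrative setup (hypothetical parameters): a t-sparse signal under
# a correlated Gaussian design X ~ N(0, Sigma).
rng = np.random.default_rng(0)
n_samples, n_features, t = 200, 50, 5

# Identity covariance, plus one strong approximate dependency:
# covariates 0 and 1 are nearly collinear, inflating the condition number.
Sigma = np.eye(n_features)
Sigma[0, 1] = Sigma[1, 0] = 0.99

X = rng.multivariate_normal(np.zeros(n_features), Sigma, size=n_samples)

beta = np.zeros(n_features)
beta[:t] = 1.0  # t-sparse ground-truth signal
y = X @ beta + 0.1 * rng.standard_normal(n_samples)

lasso = Lasso(alpha=0.05).fit(X, y)
support = np.flatnonzero(np.abs(lasso.coef_) > 1e-6)
print(support)  # indices the vanilla Lasso selects
```

Varying the off-diagonal entry toward 1 makes the pair more dependent and tends to degrade the vanilla Lasso's sample efficiency, which is the regime the paper's adaptation targets.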