Feature Adaptation for Sparse Linear Regression
Jonathan A. Kelner (MIT), Frederic Koehler
Neural Information Processing Systems
Sparse linear regression is a central problem in high-dimensional statistics. We study the correlated random design setting, where the covariates are drawn from a multivariate Gaussian N(0, Σ), and we seek an estimator with small excess risk. If the true signal is t-sparse, it is information-theoretically possible to achieve strong recovery guarantees with only O(t log n) samples. However, computationally efficient algorithms have sample complexity linear in (some variant of) the condition number of Σ. Classical algorithms such as the Lasso can require significantly more samples than necessary even if there is only a single sparse approximate dependency among the covariates. We provide a polynomial-time algorithm that, given Σ, automatically adapts the Lasso to tolerate a small number of approximate dependencies. In particular, we achieve near-optimal sample complexity for constant sparsity and when Σ has few "outlier" eigenvalues. Our algorithm fits into a broader framework of feature adaptation for sparse linear regression with ill-conditioned covariates. Within this framework, we additionally provide the first polynomial-factor improvement over brute-force search for constant sparsity t and arbitrary covariance Σ.
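The setting described above can be illustrated with a minimal sketch (not the paper's algorithm): we draw covariates from a correlated Gaussian N(0, Σ), generate responses from a t-sparse signal, fit a standard Lasso, and measure the excess prediction risk (β̂ − β)ᵀ Σ (β̂ − β). All dimensions, the regularization strength, and the covariance below are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Hypothetical illustration of the problem setting, not the paper's method.
rng = np.random.default_rng(0)
n, d, t = 200, 50, 3  # samples, ambient dimension, sparsity (arbitrary)

# A mildly correlated, well-conditioned covariance Sigma.
Sigma = 0.3 * np.ones((d, d)) + 0.7 * np.eye(d)
X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)

# t-sparse ground-truth signal and noisy responses.
beta = np.zeros(d)
beta[:t] = 1.0
y = X @ beta + 0.1 * rng.standard_normal(n)

# Standard Lasso estimator; its sample complexity can degrade when
# Sigma has approximate sparse dependencies among the covariates.
model = Lasso(alpha=0.05).fit(X, y)

# Excess risk E[(x^T (beta_hat - beta))^2] = (beta_hat - beta)^T Sigma (beta_hat - beta).
delta = model.coef_ - beta
excess_risk = delta @ Sigma @ delta
```

When Σ is well conditioned, as here, the Lasso recovers the support with modest sample size; the paper's contribution concerns adapting the features so this behavior survives a small number of approximate dependencies in Σ.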