High Dimensional Linear Regression using Lattice Basis Reduction
We consider a high-dimensional linear regression problem where the goal is to efficiently recover an unknown vector \beta^* from n noisy linear observations Y = X\beta^* + W. Rather than assuming that \beta^* is sparse, we adopt a regularization based on assuming that the underlying vector \beta^* has rational entries with the same denominator Q. We call this the Q-rationality assumption. We propose a new polynomial-time algorithm for this task based on the seminal Lenstra-Lenstra-Lovász (LLL) lattice basis reduction algorithm. We establish that under the Q-rationality assumption, our algorithm exactly recovers the vector \beta^* for a large class of distributions of the i.i.d. entries of X and non-zero noise W. We prove that it succeeds under small noise, even when the learner has access to only one observation (n = 1). Furthermore, we prove that in the case of Gaussian white noise W, with n = o(p / \log p) and Q sufficiently large, our algorithm tolerates a nearly information-theoretically optimal level of noise.
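The abstract's setting can be illustrated with a minimal sketch. The code below is not the paper's algorithm: it is a from-scratch LLL implementation (exact rational arithmetic via `Fraction`) applied to the simplest instance, namely integer coefficients (the Q = 1 case of the Q-rationality assumption), a single noiseless observation (n = 1), and a standard knapsack-style lattice embedding. All parameter choices (p = 5, 64-bit entries of x, the hidden vector beta) are ours for illustration.

```python
from fractions import Fraction
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(basis):
    """Exact Gram-Schmidt: returns orthogonalized vectors b* and mu coefficients."""
    n = len(basis)
    bstar, mu = [], [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        bi = [Fraction(c) for c in basis[i]]
        for j in range(i):
            mu[i][j] = dot(basis[i], bstar[j]) / dot(bstar[j], bstar[j])
            bi = [bi[t] - mu[i][j] * bstar[j][t] for t in range(len(bi))]
        bstar.append(bi)
    return bstar, mu

def lll(basis, delta=Fraction(3, 4)):
    """Textbook LLL reduction; recomputes Gram-Schmidt from scratch after each
    update (simple and correct, though far slower than production code)."""
    basis = [row[:] for row in basis]
    n = len(basis)
    bstar, mu = gram_schmidt(basis)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):      # size-reduce b_k against earlier vectors
            q = round(mu[k][j])             # round(Fraction) -> nearest int
            if q:
                basis[k] = [a - q * b for a, b in zip(basis[k], basis[j])]
                bstar, mu = gram_schmidt(basis)
        # Lovász condition
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1
        else:
            basis[k - 1], basis[k] = basis[k], basis[k - 1]
            bstar, mu = gram_schmidt(basis)
            k = max(k - 1, 1)
    return basis

random.seed(1)
p = 5
beta = [3, 0, 7, 1, 9]                                 # hidden integer coefficients
x = [random.getrandbits(64) | 1 for _ in range(p)]     # one row of X: large random integers
y = dot(x, beta)                                       # single noiseless observation (n = 1)

# Knapsack-style embedding: rows (e_i, x_i) for each i, plus (0, ..., 0, y).
# The lattice vector (beta, 0) is unusually short because x . beta - y = 0,
# so LLL is expected to surface it (up to sign) in the reduced basis.
rows = [[int(i == j) for j in range(p)] + [x[i]] for i in range(p)]
rows.append([0] * p + [y])

reduced = lll(rows)
recovered = None
for v in reduced:
    if v[p] == 0 and any(v[:p]):
        cand = v[:p] if any(c > 0 for c in v[:p]) else [-c for c in v[:p]]
        recovered = cand
        break

print(recovered)
```

With the entries of x this large, the only short vectors in the lattice are integer multiples of (beta, 0), which is why the first reduced vector with a zero last coordinate recovers beta up to sign. Handling noise W and general denominators Q requires the paper's additional machinery and is not attempted here.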
Reviews: High Dimensional Linear Regression using Lattice Basis Reduction
The paper presents a novel method for exactly recovering a vector of coefficients in high-dimensional linear regression, with high probability as the dimension goes to infinity. The method assumes that the true coefficients come from a finite discrete set of bounded rational values, but, unlike most work in this area, it does not assume that the coefficient vector is sparse. To achieve this, the authors extend a classical algorithm for lattice basis reduction. Crucially, this approach does not require the sample size to grow with the dimension, so in certain cases the algorithm is able to recover the exact coefficient vector from just a single sample (provided the dimension is sufficiently large). The novel connection between high-dimensional linear regression and lattice basis reduction is the main strength of the paper.