Scaled Least Squares Estimator for GLMs in Large-Scale Problems

Neural Information Processing Systems 

We study the problem of efficiently estimating the coefficients of generalized linear models (GLMs) in the large-scale setting where the number of observations $n$ is much larger than the number of predictors $p$, i.e., $n \gg p \gg 1$. We show that in GLMs with random (not necessarily Gaussian) design, the GLM coefficients are approximately proportional to the corresponding ordinary least squares (OLS) coefficients. Using this relation, we design an algorithm that achieves the same accuracy as the maximum likelihood estimator (MLE) through iterations that attain up to a cubic convergence rate, and that are cheaper than any batch optimization algorithm by at least a factor of $\mathcal{O}(p)$. We provide theoretical guarantees for our algorithm and analyze its convergence behavior in terms of the data dimensions.
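To make the proportionality idea concrete, below is a minimal sketch for the logistic model: fit OLS, then pin down the proportionality constant by solving a one-dimensional equation in the scale $c$, namely $c \cdot \frac{1}{n}\sum_i \mu'(c\, \langle x_i, \hat\beta^{ols}\rangle) = 1$ with $\mu$ the logistic mean function. The function name `sls_logistic`, the specialization to logistic regression, the bracketing grid, and the use of Brent's method for the scalar root (the abstract's cubic-rate iterations suggest a higher-order one-dimensional solver instead) are all illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.optimize import brentq


def sls_logistic(X, y):
    """Sketch of a scaled least squares (SLS) estimate for logistic regression.

    Step 1: ordinary least squares of y on X (for n >> p this could be
            computed on a subsample of rows).
    Step 2: solve c * mean(mu'(c * <x_i, b_ols>)) = 1 for the scale c,
            where mu'(t) = sigmoid(t) * (1 - sigmoid(t)); return c * b_ols.
    """
    # Step 1: OLS coefficients.
    b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    z = X @ b_ols  # linear predictors under the OLS coefficients

    def mu_prime(t):
        t = np.clip(t, -30.0, 30.0)  # avoid overflow in exp
        s = 1.0 / (1.0 + np.exp(-t))
        return s * (1.0 - s)

    # Step 2: one-dimensional root-finding for the scale factor.  A grid
    # scan locates the first sign change, which Brent's method refines.
    f = lambda c: c * np.mean(mu_prime(c * z)) - 1.0
    grid = np.logspace(-2, 3, 200)  # assumed bracketing range
    vals = np.array([f(c) for c in grid])
    crossings = np.flatnonzero(np.diff(np.sign(vals)) != 0)
    if crossings.size == 0:
        raise RuntimeError("no root found in the assumed bracket")
    i = crossings[0]
    return brentq(f, grid[i], grid[i + 1]) * b_ols


# Toy usage on synthetic Gaussian-design data.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, p = 100_000, 20
    X = rng.standard_normal((n, p))
    beta = rng.standard_normal(p) / np.sqrt(p)
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta)))
    b_sls = sls_logistic(X, y)  # should be close to beta up to estimation error
```

The point of the sketch is the cost structure: the expensive $\mathcal{O}(np)$ work is a single least-squares solve, while the GLM-specific part reduces to scalar root-finding whose per-iteration cost is $\mathcal{O}(n)$, rather than the $\mathcal{O}(np)$ (or worse) per-iteration cost of batch MLE solvers.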