Richard Peng
Fast, Provably convergent IRLS Algorithm for p-norm Linear Regression
Deeksha Adil, Richard Peng, Sushant Sachdeva
Iteratively Reweighted Least Squares (IRLS) is an easy-to-implement family of algorithms for solving p-norm linear regression that has been studied for over 50 years. However, these algorithms often diverge for p > 3, and since the work of Osborne (1985), it has been an open problem whether there is an IRLS algorithm that is guaranteed to converge rapidly for p > 3. We propose p-IRLS, the first IRLS algorithm that provably converges geometrically for any p ∈ [2, ∞). Our algorithm is simple to implement and is guaranteed to find a high-accuracy solution in a sub-linear number of iterations. Our experiments demonstrate that it performs even better than our theoretical bounds, beats the standard Matlab/CVX implementation for solving these problems by 10-50x, and is the fastest among available implementations in the high-accuracy regime.
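For context, a minimal sketch of the classical IRLS update the abstract refers to, in plain NumPy (function name and defaults are illustrative, not from the paper's code): each iteration reweights the residuals by |r|^(p-2) and solves a weighted least-squares problem. p-IRLS augments this basic scheme with careful weight regularization and step-size control to obtain its convergence guarantee; the sketch below omits those and, as the abstract notes, can diverge for p > 3.

import numpy as np

def irls_pnorm(A, b, p=4, iters=50, eps=1e-8):
    # Classical IRLS for min_x ||Ax - b||_p with p >= 2.
    # Warm start from the ordinary least-squares (p = 2) solution.
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(iters):
        r = A @ x - b
        # Reweight by |r|^(p-2), clamped away from zero for stability.
        w = np.maximum(np.abs(r), eps) ** (p - 2)
        # Solve the reweighted normal equations A^T W A x = A^T W b.
        AW = A * w[:, None]
        x = np.linalg.solve(A.T @ AW, AW.T @ b)
    return x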
SPALS: Fast Alternating Least Squares via Implicit Leverage Scores Sampling
Dehua Cheng, Richard Peng, Yan Liu, Ioakeim Perros
Tensor CANDECOMP/PARAFAC (CP) decomposition is a powerful but computationally challenging tool in modern data analytics. In this paper, we show ways of sampling intermediate steps of alternating minimization algorithms for computing low-rank tensor CP decompositions, leading to the sparse alternating least squares (SPALS) method. Specifically, we sample the Khatri-Rao product, which arises as an intermediate object during the iterations of alternating least squares. This product captures the interactions between different tensor modes and forms the main computational bottleneck for solving many tensor-related tasks. By exploiting the spectral structures of the matrix Khatri-Rao product, we provide efficient access to its statistical leverage scores. When applied to tensor CP decomposition, our method leads to the first algorithm that runs in sublinear time per iteration and approximates the output of deterministic alternating least squares algorithms.
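A minimal sketch of the sampling primitive described above (function names are ours, not from a SPALS codebase): the leverage score of row (i, j) of the Khatri-Rao product of A and B is upper-bounded by the product of the factor leverage scores tau_i(A) * tau_j(B), so one can draw rows from the product distribution over factor rows and form only the sampled Khatri-Rao rows on demand, never the full product.

import numpy as np

def leverage_scores(M):
    # Row leverage scores of M: squared row norms of Q from a thin QR.
    Q, _ = np.linalg.qr(M)
    return np.sum(Q ** 2, axis=1)

def sample_khatri_rao_rows(A, B, m, seed=None):
    # Draw m row indices of the Khatri-Rao product of A and B, with
    # row (i, j) sampled proportionally to tau_i(A) * tau_j(B), an
    # upper bound on that row's leverage score in the product.
    rng = np.random.default_rng(seed)
    pa = leverage_scores(A); pa /= pa.sum()
    pb = leverage_scores(B); pb /= pb.sum()
    i = rng.choice(A.shape[0], size=m, p=pa)
    j = rng.choice(B.shape[0], size=m, p=pb)
    # Row (i, j) of the Khatri-Rao product is the elementwise
    # product of row i of A and row j of B; form only these rows.
    rows = A[i] * B[j]
    # Rescale by 1/sqrt(m * prob) so the sampled sketch is unbiased.
    return rows / np.sqrt(m * pa[i] * pb[j])[:, None], i, j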