Collaborating Authors

 Praneeth Netrapalli


Efficient Algorithms for Smooth Minimax Optimization

Neural Information Processing Systems

In terms of g(·, y), we consider two settings - strongly convex and nonconvex - and improve upon the best known rates in both. For strongly-convex g(·, y), ∀y, we propose a new direct optimal algorithm combining Mirror-Prox and Nesterov's AGD, and show that it can find the global optimum in Õ(1/k²) iterations.
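
For readers who want a concrete starting point, the sketch below shows plain Euclidean Mirror-Prox (the extragradient method), a standard baseline for smooth saddle-point problems of the form min_x max_y g(x, y). It is not the accelerated Mirror-Prox + Nesterov AGD combination proposed in the paper; the function name extragradient_saddle, the toy objective, the step size, and the iteration count are illustrative assumptions.

```python
import numpy as np

def extragradient_saddle(grad_x, grad_y, x0, y0, step=0.1, iters=1000):
    """Euclidean Mirror-Prox (extragradient) sketch for min_x max_y g(x, y).

    grad_x / grad_y are assumed to return the partial gradients of g; the
    step size and iteration count are illustrative, not tuned constants.
    """
    x, y = x0.copy(), y0.copy()
    for _ in range(iters):
        # extrapolation (leading) step at the current point
        x_half = x - step * grad_x(x, y)
        y_half = y + step * grad_y(x, y)
        # correction step uses gradients evaluated at the extrapolated point
        x = x - step * grad_x(x_half, y_half)
        y = y + step * grad_y(x_half, y_half)
    return x, y

# Toy saddle problem: g(x, y) = 0.5*||x||^2 + x @ A @ y - 0.5*||y||^2,
# whose unique saddle point is (0, 0).
A = np.array([[1.0, 0.5], [0.0, 1.0]])
gx = lambda x, y: x + A @ y        # gradient of g in x
gy = lambda x, y: A.T @ x - y      # gradient of g in y
x_star, y_star = extragradient_saddle(gx, gy, np.ones(2), np.ones(2))
```

On this strongly-convex-strongly-concave toy problem the iterates converge to the saddle point at the origin; the accelerated scheme in the paper targets a faster rate for the same class of problems.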


Support Recovery for Orthogonal Matching Pursuit: Upper and Lower bounds

Neural Information Processing Systems

We study the problem of sparse regression where the goal is to learn a sparse vector that best optimizes a given objective function. Under the assumption that the objective function satisfies restricted strong convexity (RSC), we analyze orthogonal matching pursuit (OMP), a greedy algorithm that is used heavily in applications, and obtain a support recovery result as well as a tight generalization error bound for the OMP estimator. Further, we show a lower bound for OMP, demonstrating that both our results on support recovery and generalization error are tight up to logarithmic factors. To the best of our knowledge, these are the first such tight upper and lower bounds for any sparse regression algorithm under the RSC assumption.
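
For context, here is a minimal sketch of the textbook OMP estimator for sparse least squares that the abstract refers to: greedily add the coordinate whose column is most correlated with the residual, then re-fit by least squares on the current support. The function name and interface are illustrative, and nothing here is specific to the RSC-based analysis in the paper.

```python
import numpy as np

def omp(X, y, sparsity):
    """Textbook orthogonal matching pursuit for sparse least squares.

    X: (n, d) design matrix, y: (n,) responses, sparsity: target support size.
    Returns the estimated coefficient vector and the selected support.
    """
    d = X.shape[1]
    support = []
    beta = np.zeros(d)
    residual = y.copy()
    for _ in range(sparsity):
        # greedy step: pick the column most correlated with the current residual
        scores = np.abs(X.T @ residual)
        scores[support] = -np.inf          # never re-select a chosen coordinate
        support.append(int(np.argmax(scores)))
        # "orthogonal" step: re-fit by least squares restricted to the support
        coef, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        beta = np.zeros(d)
        beta[support] = coef
        residual = y - X @ beta
    return beta, support
```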


Provable Efficient Online Matrix Completion via Non-convex Stochastic Gradient Descent

Neural Information Processing Systems

Matrix completion, where we wish to recover a low-rank matrix by observing a few of its entries, is a widely studied problem in both theory and practice, with wide applications. Most of the provable algorithms for this problem so far have been restricted to the offline setting, where they provide an estimate of the unknown matrix using all observations simultaneously. However, in many applications the online version, where we observe one entry at a time and dynamically update our estimate, is more appealing. While existing algorithms are efficient for the offline setting, they can be highly inefficient for the online setting. In this paper, we propose the first provable, efficient online algorithm for matrix completion. Our algorithm starts from an initial estimate of the matrix and then performs non-convex stochastic gradient descent (SGD). After every observation, it performs a fast update involving only one row of two tall matrices, giving near-linear total runtime. Our algorithm can be naturally used in the offline setting as well, where it achieves sample complexity and runtime competitive with state-of-the-art algorithms. Our proofs introduce a general framework to show that SGD updates tend to stay away from saddle surfaces, and could be of broader interest for other non-convex problems.
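
To make the "fast update involving only one row of two tall matrices" concrete, the sketch below implements a generic non-convex SGD update for a factored model U Vᵀ: each observed entry (i, j, value) triggers a gradient step on row i of U and row j of V only. The initialization, constant step size, and function name online_mc_sgd are illustrative assumptions rather than the paper's prescribed choices.

```python
import numpy as np

def online_mc_sgd(observations, n_rows, n_cols, rank, step=0.05):
    """Sketch of online matrix completion by non-convex SGD on the factored
    objective f(U, V) = sum over observed (i, j) of (U[i] @ V[j] - M_ij)**2.

    Each observation touches only row i of U and row j of V, so the
    per-observation cost is O(rank). The random initialization and the
    constant step size are illustrative placeholders.
    """
    rng = np.random.default_rng(0)
    U = 0.1 * rng.standard_normal((n_rows, rank))
    V = 0.1 * rng.standard_normal((n_cols, rank))
    for i, j, value in observations:
        err = U[i] @ V[j] - value   # residual on this single entry
        grad_u = err * V[j]         # gradient w.r.t. row i of U (up to a factor of 2)
        grad_v = err * U[i]         # gradient w.r.t. row j of V (up to a factor of 2)
        U[i] -= step * grad_u
        V[j] -= step * grad_v
    return U, V
```

Because only two length-`rank` rows change per observation, the total work over a stream of observations stays near-linear in the number of observed entries, which is the efficiency property the abstract highlights.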

