Praneeth Netrapalli
Efficient Algorithms for Smooth Minimax Optimization
Kiran K. Thekumparampil, Prateek Jain, Praneeth Netrapalli, Sewoong Oh
Support Recovery for Orthogonal Matching Pursuit: Upper and Lower bounds
Raghav Somani, Chirag Gupta, Prateek Jain, Praneeth Netrapalli
We study the problem of sparse regression, where the goal is to learn a sparse vector that optimizes a given objective function. Under the assumption that the objective function satisfies restricted strong convexity (RSC), we analyze orthogonal matching pursuit (OMP), a greedy algorithm that is used heavily in applications, and obtain a support recovery result as well as a tight generalization error bound for the OMP estimator. Further, we show a lower bound for OMP, demonstrating that both our support recovery and generalization error results are tight up to logarithmic factors. To the best of our knowledge, these are the first such tight upper and lower bounds for any sparse regression algorithm under the RSC assumption.
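To make the greedy procedure concrete, here is a minimal NumPy sketch of OMP for a least-squares objective. The names `X`, `y`, and the sparsity level `k` are illustrative assumptions rather than notation from the paper, and the paper's analysis covers general RSC objectives, not just least squares.

```python
import numpy as np

def omp(X, y, k):
    """Greedy sketch of orthogonal matching pursuit for least squares.

    At each step, pick the column of X most correlated with the current
    residual, add it to the support, then refit y by least squares on the
    selected columns and recompute the residual.
    """
    n, d = X.shape
    support = []
    residual = y.copy()
    coef = np.zeros(0)
    for _ in range(k):
        correlations = np.abs(X.T @ residual)
        correlations[support] = -np.inf   # never pick the same column twice
        support.append(int(np.argmax(correlations)))
        coef, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        residual = y - X[:, support] @ coef
    w = np.zeros(d)
    w[support] = coef
    return w, support
```

The least-squares refit on the current support at every step is what distinguishes OMP from plain matching pursuit.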
Provable Efficient Online Matrix Completion via Non-convex Stochastic Gradient Descent
Chi Jin, Sham M. Kakade, Praneeth Netrapalli
Matrix completion, where we wish to recover a low-rank matrix by observing a few of its entries, is a widely studied problem, in both theory and practice, with many applications. Most provable algorithms for this problem so far have been restricted to the offline setting, where they produce an estimate of the unknown matrix using all observations simultaneously. However, in many applications the online version, where we observe one entry at a time and dynamically update our estimate, is more appealing. While existing algorithms are efficient for the offline setting, they can be highly inefficient for the online setting. In this paper, we propose the first provable, efficient online algorithm for matrix completion. Our algorithm starts from an initial estimate of the matrix and then performs non-convex stochastic gradient descent (SGD). After every observation, it performs a fast update involving only one row of each of two tall factor matrices, giving near-linear total runtime. Our algorithm can be used naturally in the offline setting as well, where its sample complexity and runtime are competitive with state-of-the-art algorithms. Our proofs introduce a general framework to show that SGD updates tend to stay away from saddle surfaces, which could be of broader interest for other non-convex problems.
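As a rough sketch of the kind of per-observation update described above (not necessarily the paper's exact algorithm; the step size `eta` and the factor shapes are illustrative assumptions), the following shows an online SGD step on the squared error of a single observed entry, touching only one row of each factor.

```python
import numpy as np

def online_sgd_step(U, V, i, j, value, eta):
    """One online SGD step for matrix completion with factors U (n x r), V (m x r).

    The matrix is modeled as U @ V.T; on observing entry (i, j) with the given
    value, we take a gradient step on the squared error of that single entry.
    Only row i of U and row j of V are read and written, so the step costs O(r).
    """
    err = U[i] @ V[j] - value       # prediction error on the observed entry
    grad_Ui = err * V[j]            # gradient w.r.t. row i of U
    grad_Vj = err * U[i]            # gradient w.r.t. row j of V
    U[i] -= eta * grad_Ui
    V[j] -= eta * grad_Vj
```

Starting from an initial estimate of the factors, one such O(r) step per incoming observation is what yields the near-linear total runtime mentioned in the abstract.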