The non-convex Burer-Monteiro approach works on smooth semidefinite programs

Neural Information Processing Systems

Semidefinite programs (SDPs) can be solved in polynomial time by interior point methods, but scalability can be an issue. To address this shortcoming, over a decade ago, Burer and Monteiro proposed to solve SDPs with few equality constraints via rank-restricted, non-convex surrogates. Remarkably, for some applications, local optimization methods seem to converge to global optima of these non-convex surrogates reliably. Although some theory supports this empirical success, a complete explanation of it remains an open question. In this paper, we consider a class of SDPs which includes applications such as max-cut, community detection in the stochastic block model, robust PCA, phase retrieval and synchronization of rotations. We show that the low-rank Burer-Monteiro formulation of SDPs in that class almost never has any spurious local optima.
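To make the rank-restricted surrogate concrete, here is a minimal sketch for a max-cut-type SDP (minimize the linear cost ⟨A, X⟩ subject to diag(X) = 1 and X ⪰ 0), where X is replaced by YY^T and each row of Y is kept on the unit sphere by projected gradient descent. This is an illustration only, not the algorithm from the paper; the function name, step size, and iteration count are illustrative choices.

```python
import numpy as np

def burer_monteiro_maxcut(A, p=3, steps=2000, lr=0.05, seed=0):
    """Minimize <A, Y Y^T> over Y in R^{n x p} with unit-norm rows,
    the Burer-Monteiro surrogate of the SDP with diag(X) = 1.
    Illustrative projected-gradient sketch, not the paper's method."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    Y = rng.standard_normal((n, p))
    Y /= np.linalg.norm(Y, axis=1, keepdims=True)  # project rows to the sphere
    for _ in range(steps):
        G = 2 * A @ Y                              # gradient of trace(A Y Y^T)
        Y = Y - lr * G                             # gradient step
        Y /= np.linalg.norm(Y, axis=1, keepdims=True)  # re-project rows
    return Y
```

Note that the feasible set becomes a product of spheres, a smooth manifold; this is the "smooth" structure the paper's title refers to, and it is what allows Riemannian optimization tools to be applied to the surrogate.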


Reviews: The non-convex Burer-Monteiro approach works on smooth semidefinite programs

Neural Information Processing Systems

This paper considers a specific class of semidefinite programs (minimize a linear objective subject to linear equality and positive-semidefiniteness constraints). This convex optimization problem is replaced by a non-convex one in which the matrix variable is replaced by the outer product of a factor matrix and its transpose. The paper shows that under certain conditions (the size of the factorization being large enough), this non-convex surrogate almost never has spurious local optima. More specifically, the paper is very closely related to prior work by Burer and Monteiro: I think [1] proposed the change of variables X = YY^T, also proposed a BFGS augmented Lagrangian algorithm for solving the non-convex problem, and showed experimentally that it reliably gives a global minimum if the number of columns of Y is large enough.
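The change of variables the review refers to can be written out explicitly. A sketch, in the standard SDP form with a linear constraint map (here denoted \(\mathcal{A}\)):

```latex
\min_{X \in \mathbb{S}^{n}} \; \langle C, X \rangle
\quad \text{s.t.} \quad \mathcal{A}(X) = b,\; X \succeq 0
\qquad \longrightarrow \qquad
\min_{Y \in \mathbb{R}^{n \times p}} \; \langle C, YY^{\top} \rangle
\quad \text{s.t.} \quad \mathcal{A}(YY^{\top}) = b
```

The constraint \(X \succeq 0\) is enforced automatically since \(YY^{\top}\) is always positive semidefinite, but the feasible set in \(Y\) is generally non-convex.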


The non-convex Burer-Monteiro approach works on smooth semidefinite programs

Boumal, Nicolas, Voroninski, Vlad, Bandeira, Afonso

Neural Information Processing Systems
