On a class of geodesically convex optimization problems solved via Euclidean MM methods
arXiv.org Artificial Intelligence
We study geodesically convex (g-convex) problems that can be written as a difference of Euclidean convex functions. This structure arises in several optimization problems in statistics and machine learning, e.g., matrix scaling, M-estimators for covariances, and Brascamp-Lieb inequalities. Our work offers efficient algorithms that, on the one hand, exploit g-convexity to ensure global optimality along with guarantees on iteration complexity; on the other hand, the split structure permits us to develop Euclidean Majorization-Minimization (MM) algorithms that bypass the need to compute expensive Riemannian operations such as exponential maps and parallel transport. We illustrate our results by specializing them to a few concrete optimization problems that have been previously studied in the machine learning literature. Ultimately, we hope our work helps motivate the broader search for mixed Euclidean-Riemannian optimization algorithms.
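The abstract does not spell out the paper's concrete updates, but the difference-of-convex splitting it describes admits a standard Euclidean MM scheme: with f = g - h and both g, h convex, each step linearizes h at the current iterate and minimizes the resulting convex majorizer. The sketch below, assuming only this generic structure (the toy g, h, and closed-form step are illustrative choices, not from the paper), shows one such iteration:

```python
import numpy as np

# Generic MM / difference-of-convex iteration (illustrative, not the
# paper's algorithm). For f(x) = g(x) - h(x) with g, h Euclidean convex,
# the update is
#   x_{k+1} = argmin_x  g(x) - <subgrad h(x_k), x>,
# since the linearization of -h gives a convex upper bound on f.
# Toy instance: g(x) = ||x||^2, h(x) = 2 * ||x||_1, so the stationary
# points of f away from zero satisfy |x_i| = 1.

def subgrad_h(x):
    # A subgradient of h(x) = 2 * ||x||_1 (valid for nonzero coordinates).
    return 2.0 * np.sign(x)

def mm_step(x):
    # The majorizer ||x||^2 - <subgrad_h(x_k), x> is minimized in
    # closed form at x = subgrad_h(x_k) / 2.
    return subgrad_h(x) / 2.0

x = np.array([0.3, -1.7, 4.0])
for _ in range(10):
    x = mm_step(x)
print(x)  # -> [ 1. -1.  1.]
```

Each iterate decreases f monotonically because the majorizer touches f at the current point and upper-bounds it everywhere, which is the same monotone-descent mechanism the paper leverages while avoiding Riemannian maps.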
Oct-20-2022