Mathematics of Computing: Overviews
Riemannian Langevin Algorithm for Solving Semidefinite Programs
Li, Mufan Bill, Erdogdu, Murat A.
We propose a Langevin diffusion-based algorithm for non-convex optimization and sampling on a product manifold of spheres. Under a logarithmic Sobolev inequality, we establish a finite-iteration convergence guarantee to the Gibbs distribution in terms of Kullback-Leibler divergence. We show that with an appropriate temperature choice, the suboptimality gap to the global minimum is guaranteed to be arbitrarily small with high probability. As an application, we analyze the proposed Langevin algorithm for solving the Burer-Monteiro relaxation of a semidefinite program (SDP). In particular, we establish a logarithmic Sobolev inequality for the Burer-Monteiro problem when there are no spurious local minima, which implies fast escape from saddle points. Combining these results, we provide a global optimality guarantee for the SDP and the Max-Cut problem. More precisely, we show that the Langevin algorithm achieves $\epsilon$-multiplicative accuracy with high probability in $\widetilde{\Omega}( n^2 \epsilon^{-3} )$ iterations, where $n$ is the size of the cost matrix.
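To make the update concrete, here is a minimal sketch of Riemannian Langevin iterations on the product of spheres $(S^{k-1})^n$ for the Max-Cut Burer-Monteiro objective $\langle C, \sigma\sigma^\top \rangle$. The step size, inverse temperature, iteration count, and the row-normalization retraction are illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np

def riemannian_langevin_maxcut(C, k, beta=50.0, eta=1e-3, n_iters=5000, seed=0):
    """Sketch: Riemannian Langevin on (S^{k-1})^n for the Burer-Monteiro
    relaxation min_sigma <C, sigma sigma^T>, each row on the unit sphere.
    Assumes C is symmetric; beta/eta/n_iters are illustrative choices."""
    n = C.shape[0]
    rng = np.random.default_rng(seed)
    sigma = rng.standard_normal((n, k))
    sigma /= np.linalg.norm(sigma, axis=1, keepdims=True)
    for _ in range(n_iters):
        euc_grad = 2.0 * C @ sigma  # Euclidean gradient of <C, sigma sigma^T>
        # Project gradient and noise onto the tangent space of each sphere.
        radial = np.sum(euc_grad * sigma, axis=1, keepdims=True)
        grad = euc_grad - radial * sigma
        noise = rng.standard_normal((n, k))
        noise -= np.sum(noise * sigma, axis=1, keepdims=True) * sigma
        sigma = sigma - eta * grad + np.sqrt(2.0 * eta / beta) * noise
        sigma /= np.linalg.norm(sigma, axis=1, keepdims=True)  # retract to spheres
    return sigma
```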
Adaptive surrogate models for parametric studies
The computational effort for evaluating numerical simulations based on, e.g., the finite-element method is high. Metamodels can be utilized to create a low-cost alternative. However, the number of samples required to build a sufficiently accurate metamodel should be kept low, which can be achieved by using adaptive sampling techniques. In this Master's thesis, adaptive sampling techniques are investigated for their use in creating metamodels with the Kriging technique, which interpolates values by a Gaussian process governed by prior covariances. The Kriging framework, with an extension to multifidelity problems, is presented and utilized to compare adaptive sampling techniques found in the literature on benchmark problems as well as applications in contact mechanics. This thesis offers the first comprehensive comparison of a large spectrum of adaptive techniques for the Kriging framework. Furthermore, a multitude of adaptive techniques is introduced to multifidelity Kriging as well as to a Kriging model with reduced hyperparameter dimension called partial-least-squares Kriging. In addition, an innovative adaptive scheme for binary classification is presented and tested for identifying chaotic motion of a Duffing-type oscillator.
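As a concrete illustration of the variance-driven family of adaptive schemes the thesis compares, the following sketch refits a Kriging (Gaussian process) metamodel and adds the candidate point with the largest predictive standard deviation. The RBF kernel, candidate-pool design, and budget are illustrative assumptions rather than the thesis's exact setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def adaptive_kriging(f, x_init, candidates, n_adaptive=10):
    """Sketch of maximum-variance adaptive sampling for a Kriging metamodel:
    repeatedly refit the GP on the current samples and evaluate the
    expensive model f at the most uncertain candidate point."""
    X = np.atleast_2d(x_init)
    y = np.array([f(x) for x in X])
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
    for _ in range(n_adaptive):
        gp.fit(X, y)
        _, std = gp.predict(candidates, return_std=True)
        x_new = candidates[np.argmax(std)]  # largest predictive uncertainty
        X = np.vstack([X, x_new])
        y = np.append(y, f(x_new))
    gp.fit(X, y)
    return gp
```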
Optimal Transport on Discrete Domains
Inspired by the matching of supply to demand in logistical problems, the optimal transport (or Monge--Kantorovich) problem involves the matching of probability distributions defined over a geometric domain such as a surface or manifold. In its most obvious discretization, optimal transport becomes a large-scale linear program, which is typically infeasible to solve efficiently on triangle meshes, graphs, point clouds, and other domains encountered in graphics and machine learning. Recent breakthroughs in numerical optimal transport, however, enable scalability to orders-of-magnitude larger problems, solvable in a fraction of a second. Here, we discuss advances in numerical optimal transport that leverage understanding of both discrete and smooth aspects of the problem. State-of-the-art techniques in discrete optimal transport combine insight from partial differential equations (PDE) with convex analysis to reformulate, discretize, and optimize transportation problems. The end result is a set of theoretically justified models suitable for domains with thousands or millions of vertices. Since numerical optimal transport is a relatively new discipline, special emphasis is placed on identifying and explaining open problems in need of mathematical insight and additional research.
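One representative of these scalable solvers is entropic regularization with Sinkhorn matrix scaling, which replaces the linear program with alternating diagonal rescalings of a Gibbs kernel. A minimal sketch, with the regularization strength and iteration count as illustrative assumptions:

```python
import numpy as np

def sinkhorn(mu, nu, cost, eps=1e-2, n_iters=500):
    """Sketch: entropy-regularized optimal transport between discrete
    distributions mu and nu with pairwise cost matrix `cost`."""
    K = np.exp(-cost / eps)  # Gibbs kernel of the regularized problem
    v = np.ones_like(nu)
    for _ in range(n_iters):
        u = mu / (K @ v)     # alternate scalings enforce the two marginals
        v = nu / (K.T @ u)
    return u[:, None] * K * v[None, :]  # approximate transport plan
```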
Implementing Randomized Matrix Algorithms in Parallel and Distributed Environments
Yang, Jiyan, Meng, Xiangrui, Mahoney, Michael W.
In this era of large-scale data, distributed systems built on top of clusters of commodity hardware provide cheap and reliable storage and scalable processing of massive data. Here, we review recent work on developing and implementing randomized matrix algorithms in large-scale parallel and distributed environments. Randomized algorithms for matrix problems have received a great deal of attention in recent years, but thus far mostly in theory, in machine learning applications, or in implementations on a single machine. Our main focus is on the underlying theory and practical implementation of random projection and random sampling algorithms for very large, very overdetermined (i.e., overconstrained) $\ell_1$ and $\ell_2$ regression problems. Randomization can be used in one of two related ways: either to construct sub-sampled problems that can be solved, exactly or approximately, with traditional numerical methods; or to construct preconditioned versions of the original full problem that are easier to solve with traditional iterative algorithms. Theoretical results demonstrate that in near input-sparsity time and with only a few passes through the data one can obtain very strong relative-error approximate solutions, with high probability. Empirical results highlight the importance of various trade-offs (e.g., between the time to construct an embedding and the conditioning quality of the embedding, between the relative importance of computation versus communication, etc.) and demonstrate that $\ell_1$ and $\ell_2$ regression problems can be solved to low, medium, or high precision in existing distributed systems on up to terabyte-sized data.
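As a minimal illustration of the first route (constructing a sub-sampled problem that a traditional method then solves), the following sketch compresses an overdetermined $\ell_2$ regression with a dense Gaussian projection. The sketch size and the Gaussian embedding are illustrative assumptions; the near input-sparsity-time results reviewed in the paper rely on faster structured or sparse embeddings.

```python
import numpy as np

def sketch_and_solve_l2(A, b, sketch_rows=None, rng=None):
    """Sketch: solve min_x ||Ax - b||_2 approximately by compressing the
    tall problem with a Gaussian random projection and handing the small
    problem to a classical direct solver."""
    m, n = A.shape
    rng = np.random.default_rng() if rng is None else rng
    s = sketch_rows or max(4 * n, 100)            # oversample relative to n
    S = rng.standard_normal((s, m)) / np.sqrt(s)  # Gaussian sketching matrix
    x_approx, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
    return x_approx
```

The second route would instead use the QR factors of the sketched matrix $SA$ as a preconditioner for an iterative solver on the full problem, trading a little extra setup for a high-precision solution.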