Computational and Statistical Asymptotic Analysis of the JKO Scheme for Iterative Algorithms to Update Distributions

Shang Wu and Yazhen Wang

arXiv.org, Machine Learning

The seminal work of Jordan, Kinderlehrer, and Otto [33] developed what is now widely known as the JKO scheme, a foundational method for generating iterative algorithms that compute distributions, one that has reshaped our understanding of sampling algorithms. The JKO scheme can be interpreted as a gradient flow of the free energy with respect to the Wasserstein metric, often referred to as the Wasserstein gradient flow. This interpretation has led to significant advances in machine learning, including applications in reinforcement learning to solve policy-distribution optimization problems [55]. While the JKO scheme traditionally assumes that the underlying model is fully known, in this paper we relax this assumption by allowing models with unknown parameters. We develop statistical approaches to estimate these parameters and adapt the JKO scheme to work with the estimated values. In particular, Langevin equations, a class of stochastic differential equations, play a key role in describing the evolution of physical systems, facilitating stochastic gradient descent in machine learning, and enabling Markov chain Monte Carlo (MCMC) simulations in numerical computing. For examples and detailed discussions, see [11, 8, 51, 22, 19, 39, 43]. Solutions to Langevin equations, known as Langevin diffusions, are stochastic processes whose distributions evolve according to the Fokker-Planck equations [27, 48].
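For concreteness, and using our own notation rather than the paper's, the JKO step for a free energy functional $F$ and step size $\tau > 0$ takes the standard variational form of [33]:

\[
\rho_{k+1} \in \operatorname*{arg\,min}_{\rho} \Big\{ F(\rho) + \frac{1}{2\tau} W_2^2(\rho, \rho_k) \Big\},
\]

where $W_2$ denotes the 2-Wasserstein distance; as $\tau \to 0$, the discrete iterates approximate the Wasserstein gradient flow of $F$. Likewise, a Langevin equation with potential $V$,

\[
dX_t = -\nabla V(X_t)\, dt + \sqrt{2}\, dW_t,
\]

has marginal densities $\rho_t$ governed by the Fokker-Planck equation

\[
\partial_t \rho_t = \nabla \cdot (\rho_t \nabla V) + \Delta \rho_t,
\]

which is precisely the Wasserstein gradient flow of the free energy $F(\rho) = \int V \rho \, dx + \int \rho \log \rho \, dx$, the classical result of [33].

To illustrate the plug-in idea in the simplest possible setting, the following sketch (ours, not the paper's algorithm; the Gaussian model, the estimator, and all names such as theta_hat and step are illustrative assumptions) estimates an unknown drift parameter from data and then runs the unadjusted Langevin algorithm, i.e., the Euler-Maruyama discretization of the Langevin equation above, with the estimated value:

import numpy as np

rng = np.random.default_rng(0)

# Step 1: estimate the unknown parameter from i.i.d. observations.
# Here the target family is V_theta(x) = (x - theta)^2 / 2, so the
# MLE of theta is simply the sample mean (illustrative choice).
theta_true = 2.0
data = rng.normal(theta_true, 1.0, size=500)
theta_hat = data.mean()

# Step 2: unadjusted Langevin algorithm targeting exp(-V_theta_hat),
# i.e., N(theta_hat, 1):  x_{k+1} = x_k - step * grad V(x_k) + sqrt(2 step) * xi_k
def grad_V(x, theta):
    return x - theta  # gradient of (x - theta)^2 / 2

step, n_iters, n_chains = 0.05, 2000, 1000
x = np.zeros(n_chains)  # run many chains in parallel for a quick sanity check
for _ in range(n_iters):
    x = x - step * grad_V(x, theta_hat) + np.sqrt(2 * step) * rng.normal(size=n_chains)

print(f"theta_hat = {theta_hat:.3f}; mean of Langevin samples = {x.mean():.3f}")

In this toy example the chains' empirical distribution approximates the plug-in target $N(\hat\theta, 1)$, so the sampling error decomposes into the statistical error of $\hat\theta$ and the computational error of the discretized Langevin dynamics, the two sources of error whose asymptotics the paper studies jointly.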