Neural Wasserstein Gradient Flows for Maximum Mean Discrepancies with Riesz Kernels
Altekrüger, Fabian, Hertrich, Johannes, Steidl, Gabriele
arXiv.org Artificial Intelligence
In this paper we contribute to the understanding of such flows. We propose to approximate the backward scheme of Jordan, Kinderlehrer and Otto for computing such Wasserstein gradient flows, as well as a forward scheme for so-called Wasserstein steepest descent flows, by neural networks (NNs). Since we cannot restrict ourselves to absolutely continuous measures, we have to deal with transport plans and velocity plans instead of the usual transport maps and velocity fields. Indeed, we approximate the disintegration of both plans by generative NNs, which are learned with respect to appropriate loss functions.

For approximating Wasserstein gradient flows for more general functionals, a backward discretization scheme in time, known as the Jordan-Kinderlehrer-Otto (JKO) scheme (Giorgi, 1993; Jordan et al., 1998), can be used. Its basic idea is to discretize the whole flow in time by iteratively applying the Wasserstein proximal operator with respect to F. In the case of absolutely continuous measures, Brenier's theorem (Brenier, 1987) can be applied to rewrite this operator via transport maps having convex potentials, and to learn these transport maps (Fan et al., 2022) or their potentials (Alvarez-Melis et al., 2022; Bunne et al., 2022; Mokrov et al., 2021) by neural networks (NNs). In most papers, the objective
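The JKO idea can be illustrated at the particle level for the MMD functional with the Riesz (negative-distance) kernel K(x, y) = -||x - y||. The sketch below is a simplified illustration, not the paper's NN-based method: it replaces the generative-network parameterization of transport plans by freely moving particles, upper-bounds the Wasserstein proximal term by the matched-particle distance ||X - X_k||^2 / (2*tau*n), and solves each backward step by inner gradient descent; the names `jko_step` and the values of `tau`, `lr`, and the iteration counts are arbitrary choices for the demo.

```python
import numpy as np

def pairwise_dists(X, Y):
    # Euclidean distance matrix between particle clouds X (n,d) and Y (m,d)
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2.0 * X @ Y.T
    return np.sqrt(np.maximum(sq, 0.0))

def mmd_riesz(X, Y):
    # squared MMD between empirical measures for K(x,y) = -||x-y|| (Riesz, r=1)
    n, m = len(X), len(Y)
    return (-pairwise_dists(X, X)).sum() / n**2 \
         + (-pairwise_dists(Y, Y)).sum() / m**2 \
         - 2.0 * (-pairwise_dists(X, Y)).sum() / (n * m)

def mmd_grad(X, Y):
    # gradient of mmd_riesz w.r.t. the particle positions X
    n, m = len(X), len(Y)
    Dxx = pairwise_dists(X, X)
    Dxy = pairwise_dists(X, Y)
    np.fill_diagonal(Dxx, np.inf)   # avoid 0/0 on the diagonal
    Dxy[Dxy == 0.0] = np.inf        # guard against coinciding particles
    diff_xx = X[:, None, :] - X[None, :, :]          # (n,n,d)
    diff_xy = X[:, None, :] - Y[None, :, :]          # (n,m,d)
    return -(diff_xx / Dxx[:, :, None]).sum(1) * 2.0 / n**2 \
         + (diff_xy / Dxy[:, :, None]).sum(1) * 2.0 / (n * m)

def jko_step(Xk, Y, tau=0.5, inner=200, lr=0.05):
    # one backward (JKO) step: approximately minimize
    #   MMD^2(X, Y) + ||X - Xk||^2 / (2 * tau * n)
    # by inner gradient descent on the particle positions
    X = Xk.copy()
    n = len(X)
    for _ in range(inner):
        X -= lr * (mmd_grad(X, Y) + (X - Xk) / (tau * n))
    return X

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Y = rng.normal(size=(40, 2))            # target particles
    X = rng.normal(size=(40, 2)) + 4.0      # shifted source particles
    for k in range(5):                      # a few JKO steps toward the target
        X = jko_step(X, Y)
        print(k, mmd_riesz(X, Y))           # MMD should decrease monotonically
```

Each outer iteration performs one implicit (proximal) step of the flow; decreasing `tau` trades accuracy of the time discretization against the number of steps needed.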
Jun-2-2023