Learning heavy-tailed distributions with Wasserstein-proximal-regularized $\alpha$-divergences

Ziyu Chen, Hyemin Gu, Markos A. Katsoulakis, Luc Rey-Bellet, Wei Zhu

arXiv.org Machine Learning 

Heavy tails are ubiquitous, arising in fields as diverse as extreme events in ocean waves [9], floods [21], the social sciences [27, 16], human activities [17, 35], biology [18], and computer science [29]. Learning to generate heavy-tailed target distributions has been explored with GANs via tail estimation [10, 15, 1]. While estimating the tail behavior of a heavy-tailed distribution is important, it is equally crucial to select objectives that measure discrepancies between such distributions and enable stable learning. In generative modeling, the goal is to produce samples that mimic those drawn from an underlying data distribution, typically by designing algorithms that minimize a probability divergence between the generated and target distributions. It is therefore crucial to choose a divergence that flexibly and accurately respects the behavior of the data distribution.
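To make the divergence-minimization recipe concrete, here is a minimal PyTorch sketch that fits a generator to a heavy-tailed Pareto target. The sample-based energy distance stands in for the divergence; it is not the Wasserstein-proximal-regularized $\alpha$-divergence studied in the paper, and the network architecture, Pareto tail index, and hyperparameters are illustrative assumptions.

```python
# A minimal sketch of divergence-minimizing generative training on a
# heavy-tailed target. Assumptions: a 1-D Pareto(alpha=1.5) target, a small
# MLP generator, and the energy distance as a stand-in discrepancy (not the
# paper's Wasserstein-proximal-regularized alpha-divergence).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Heavy-tailed target distribution: Pareto with tail index 1.5 (finite mean).
target = torch.distributions.Pareto(scale=1.0, alpha=1.5)

# Generator: pushes 2-D Gaussian noise through an MLP to 1-D samples.
gen = nn.Sequential(
    nn.Linear(2, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)

def energy_distance(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Sample-based energy distance 2*E|X-Y| - E|X-X'| - E|Y-Y'| for 1-D samples."""
    d_xy = (x[:, None] - y[None, :]).abs().mean()
    d_xx = (x[:, None] - x[None, :]).abs().mean()
    d_yy = (y[:, None] - y[None, :]).abs().mean()
    return 2.0 * d_xy - d_xx - d_yy

# Training loop: minimize the discrepancy between generated and target samples.
for step in range(2000):
    z = torch.randn(256, 2)
    x = gen(z).squeeze(-1)           # generated samples
    y = target.sample((256,))        # target samples
    loss = energy_distance(x, y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Compare an upper quantile of model vs. target as a rough tail check.
with torch.no_grad():
    x = gen(torch.randn(10_000, 2)).squeeze(-1)
    y = target.sample((10_000,))
    print(f"95% quantile  model: {x.quantile(0.95):.2f}  target: {y.quantile(0.95):.2f}")
```

The energy distance is chosen here only because it admits a simple unbiased sample-based estimator with no discriminator; the paper's point is precisely that the choice of discrepancy matters for heavy-tailed targets, and different divergences weight the tails differently.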
