On the capacity of deep generative networks for approximating distributions

Yunfei Yang, Zhen Li, Yang Wang

arXiv.org Machine Learning 

We study the efficacy and efficiency of deep generative networks for approximating probability distributions. We prove that neural networks can transform a one-dimensional source distribution into a distribution that is arbitrarily close to a high-dimensional target distribution in Wasserstein distances. Upper bounds on the approximation error are obtained in terms of the neural networks' width and depth. It is shown that the approximation error grows at most linearly with the ambient dimension and that the approximation order depends only on the intrinsic dimension of the target distribution. In contrast, when $f$-divergences are used as metrics of distributions, the approximation property is different. We prove that in order to approximate the target distribution in $f$-divergences, the dimension of the source distribution cannot be smaller than the intrinsic dimension of the target distribution. Therefore, $f$-divergences are less adequate than Wasserstein distances as metrics of distributions for generating samples.
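The setting can be illustrated numerically. The sketch below is not the authors' construction; it is a minimal, assumption-laden example in PyTorch: a small generator pushes a one-dimensional uniform source forward to a two-dimensional target supported near a circle (intrinsic dimension roughly one), and training minimizes a sliced Wasserstein-1 distance. The network sizes, target distribution, and training hyperparameters are purely illustrative choices.

```python
# Minimal sketch (illustrative assumptions only): push a 1-D uniform source
# through a small generator to approximate a 2-D target with low intrinsic
# dimension, monitoring a sliced Wasserstein-1 distance.
import torch
import torch.nn as nn

torch.manual_seed(0)

def sample_target(n):
    # Target: uniform on the unit circle plus small Gaussian noise (intrinsic dim ~ 1).
    theta = 2 * torch.pi * torch.rand(n)
    pts = torch.stack([theta.cos(), theta.sin()], dim=1)
    return pts + 0.02 * torch.randn(n, 2)

def sliced_w1(x, y, n_proj=64):
    # Average the 1-D Wasserstein-1 distances over random projection directions.
    dirs = torch.randn(n_proj, x.shape[1])
    dirs = dirs / dirs.norm(dim=1, keepdim=True)
    px = (x @ dirs.T).sort(dim=0).values
    py = (y @ dirs.T).sort(dim=0).values
    return (px - py).abs().mean()

# Generator: 1-D source -> 2-D output; width and depth are arbitrary here.
gen = nn.Sequential(nn.Linear(1, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 2))
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)

for step in range(2000):
    z = torch.rand(512, 1)  # one-dimensional uniform source distribution
    loss = sliced_w1(gen(z), sample_target(512))
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 500 == 0:
        print(f"step {step:4d}  sliced W1 ~ {loss.item():.4f}")
```

In this toy example the source is one-dimensional while the target lives in a two-dimensional ambient space, mirroring the abstract's claim that closeness in Wasserstein distance is achievable even when the source dimension is smaller than the ambient dimension of the target.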
