A Universal Approximation Theorem of Deep Neural Networks for Expressing Probability Distributions

Neural Information Processing Systems

This paper studies the universal approximation property of deep neural networks for representing probability distributions. Given a target distribution $\pi$ and a source distribution $p_z$ both defined on $\mathbb{R}^d$, we prove under some assumptions that there exists a deep neural network $g:\mathbb{R}^d\to \mathbb{R}$ with ReLU activation such that the push-forward measure $(\nabla g)_\# p_z$ of $p_z$ under the map $\nabla g$ is arbitrarily close to the target measure $\pi$. The closeness is measured by three classes of integral probability metrics between probability distributions: the $1$-Wasserstein distance, the maximum mean discrepancy (MMD), and the kernelized Stein discrepancy (KSD). We prove upper bounds for the size (width and depth) of the deep neural network in terms of the dimension $d$ and the approximation error $\varepsilon$ with respect to the three discrepancies. In particular, the size of the neural network can grow exponentially in $d$ when the $1$-Wasserstein distance is used as the discrepancy, whereas for both MMD and KSD the size of the neural network depends on $d$ at most polynomially. Our proof relies on convergence estimates of empirical measures under the aforementioned discrepancies and on semi-discrete optimal transport.
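The setup in the abstract can be illustrated numerically. The sketch below is not the paper's construction and does not use the network sizes from its bounds; the two-hidden-layer width-64 ReLU network, the Gaussian kernel bandwidth, the standard normal source $p_z$, and the shifted Gaussian target samples are all illustrative assumptions. It pushes source samples through the gradient map $\nabla g$ of a scalar ReLU network $g$ (computed by autograd) and compares the push-forward to target samples with an empirical Gaussian-kernel MMD.

    # Minimal sketch (assumptions noted above): push p_z through nabla g
    # and measure an empirical MMD against target samples.
    import torch
    import torch.nn as nn

    d = 2  # ambient dimension; the latent dimension equals d, as in the paper

    # Scalar-valued ReLU network g : R^d -> R (illustrative size)
    g = nn.Sequential(nn.Linear(d, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, 1))

    def push_forward(z):
        """Map samples z ~ p_z through the gradient map nabla g."""
        z = z.clone().requires_grad_(True)
        # Summing over the batch gives per-sample gradients, shape (n, d)
        (grad_z,) = torch.autograd.grad(g(z).sum(), z)
        return grad_z  # samples of the push-forward (nabla g)_# p_z

    def gaussian_mmd2(x, y, bandwidth=1.0):
        """Biased empirical squared MMD with a Gaussian kernel."""
        k = lambda a, b: torch.exp(-torch.cdist(a, b).pow(2) / (2 * bandwidth ** 2))
        return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

    n = 1024
    z = torch.randn(n, d)                   # source p_z = N(0, I_d)
    target = 0.5 * torch.randn(n, d) + 2.0  # hypothetical target samples
    print(float(gaussian_mmd2(push_forward(z), target)))

Because $g$ is piecewise linear, $\nabla g$ is piecewise constant, so the push-forward is a discrete measure supported on finitely many points; this is consistent with the paper's proof strategy via convergence of empirical measures under the three discrepancies.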


Review for NeurIPS paper: A Universal Approximation Theorem of Deep Neural Networks for Expressing Probability Distributions

Neural Information Processing Systems

Weaknesses: I see this paper as a positive contribution, but I have the following unclear points. Is it possible to describe the number of weights the network needs for the approximation? Some important papers on approximation capability investigate the relation between the number of weights and the approximation power. How does this result relate to them? Is it possible to give a similar rate when the target distribution has no density?


Review for NeurIPS paper: A Universal Approximation Theorem of Deep Neural Networks for Expressing Probability Distributions

Neural Information Processing Systems

This paper shows that the gradients of certain ResNets can serve as generators to produce any of a broad class of distributions, measuring quality in several different metrics, including empirical measures. Pushing forward the gradient of a network rather than the network itself is somewhat unusual, and the paper requires a latent dimension the same size as the ambient dimension of the target distribution. Nevertheless, the proof is satisfying, explicit, and clear. This paper makes a nice contribution to the theory of generative models.
