Reviews: Learning Distributions Generated by One-Layer ReLU Networks

Neural Information Processing Systems 

A popular class of generative models works as follows: pass standard Gaussian noise through a neural network. A major open question is: what is the structure of the resulting distribution? Given samples from such a distribution, can we learn the distribution's parameters? This question is the topic of this paper. Specifically, the paper considers a one-layer ReLU neural network, specified by a weight matrix W and a bias b.
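To make the generative model concrete, here is a minimal sketch: samples are produced as ReLU(Wz + b) with z drawn from a standard Gaussian. The particular dimensions and values of W and b below are illustrative choices, not taken from the paper (and b is written as a vector here for generality).

```python
import numpy as np

def sample_relu_network(W, b, n_samples, seed=None):
    """Draw n_samples outputs of the one-layer ReLU generator ReLU(W z + b).

    W : (d, k) weight matrix, b : (d,) bias, z ~ N(0, I_k).
    """
    rng = np.random.default_rng(seed)
    k = W.shape[1]                            # latent (input) dimension
    Z = rng.standard_normal((n_samples, k))   # rows are i.i.d. z ~ N(0, I_k)
    return np.maximum(Z @ W.T + b, 0.0)       # coordinate-wise ReLU

# Example usage with arbitrary (hypothetical) parameters: d = 3, k = 2.
W = np.array([[1.0, -0.5],
              [0.3,  2.0],
              [-1.0, 0.7]])
b = np.array([0.1, -0.2, 0.0])
X = sample_relu_network(W, b, n_samples=1000, seed=0)
```

Every coordinate of every sample is nonnegative, since the ReLU clips negatives to zero; the learning problem the paper studies is recovering W and b from such samples X alone.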