Learning a 1-layer conditional generative model in total variation
Neural Information Processing Systems
A conditional generative model is a method for sampling from a conditional distribution p(y \mid x). For example, one may want to sample an image of a cat given the label "cat". A feed-forward conditional generative model is a function g(x, z) that takes the input x and a random seed z, and outputs a sample y from p(y \mid x). Ideally the distribution of outputs (x, g(x, z)) would be close in total variation to the ideal distribution (x, y).

Generalization bounds for other learning models require assumptions on the distribution of x, even in simple settings like linear regression with Gaussian noise. We show these assumptions are unnecessary in our model, for both linear regression and single-layer ReLU networks.
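As an illustrative sketch (not the paper's construction): a feed-forward conditional generator in the single-layer ReLU setting can be written as g(x, z) = ReLU(Ax + Bz), where A and B are hypothetical weight matrices. Drawing fresh seeds z for a fixed input x produces samples whose empirical distribution approximates the conditional distribution the generator induces.

```python
import numpy as np

rng = np.random.default_rng(0)

d_x, d_z, d_y = 4, 4, 3
# Hypothetical weights for a single-layer ReLU generator (illustration only).
A = rng.standard_normal((d_y, d_x))
B = rng.standard_normal((d_y, d_z))

def g(x, z):
    """Feed-forward conditional generator: input x plus random seed z -> sample y."""
    return np.maximum(A @ x + B @ z, 0.0)  # single-layer ReLU network

# For a fixed input x, varying the seed z yields samples from the
# conditional distribution over y that this generator induces.
x = rng.standard_normal(d_x)
samples = np.array([g(x, rng.standard_normal(d_z)) for _ in range(1000)])
```

Comparing the joint distribution of (x, g(x, z)) to (x, y) in total variation, as the abstract describes, would additionally require samples from the true p(y \mid x).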
Jan-19-2025, 13:07:15 GMT