Asymptotic Guarantees for Generative Modeling Based on the Smooth Wasserstein Distance
Neural Information Processing Systems
Minimum distance estimation (MDE) has recently gained attention as a formulation of (implicit) generative modeling. It minimizes, over model parameters, a statistical distance between the empirical data distribution and the model. This formulation lends itself well to theoretical analysis, but typical results are hindered by the curse of dimensionality. To overcome this and devise a scalable finite-sample statistical MDE theory, we adopt the framework of the smooth 1-Wasserstein distance (SWD) $\mathsf{W}_1^{(\sigma)}$. The SWD was recently shown to preserve the metric and topological structure of classic Wasserstein distances, while enjoying dimension-free empirical convergence rates.
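To make the SWD concrete: $\mathsf{W}_1^{(\sigma)}$ is the 1-Wasserstein distance between the two distributions after each is convolved with an isotropic Gaussian $\mathcal{N}(0, \sigma^2 I)$. The sketch below is a simple one-dimensional Monte Carlo illustration (not the paper's estimator): convolution with a Gaussian is simulated by adding independent Gaussian noise to each sample, and the resulting empirical distributions are compared with `scipy.stats.wasserstein_distance`. The function name `smooth_w1` and the noise-replication scheme are our own illustrative choices.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def smooth_w1(x, y, sigma, n_noise=1000, seed=0):
    """Monte Carlo sketch of the smooth 1-Wasserstein distance W1^(sigma)
    between two 1-D samples x and y: smooth each empirical distribution by
    convolving with N(0, sigma^2), then compute the classic W1 distance.

    Illustration only; the paper's theory concerns the population quantity.
    """
    rng = np.random.default_rng(seed)
    # Convolving an empirical measure with N(0, sigma^2) is equivalent in
    # distribution to adding independent Gaussian noise to each data point;
    # we replicate each point n_noise times to approximate the smoothed law.
    xs = np.repeat(x, n_noise) + sigma * rng.standard_normal(x.size * n_noise)
    ys = np.repeat(y, n_noise) + sigma * rng.standard_normal(y.size * n_noise)
    return wasserstein_distance(xs, ys)
```

For example, comparing samples concentrated at 0 and at 1 should yield a distance close to 1, since Gaussian smoothing by the same $\sigma$ shifts both measures identically.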