Deep generative models as the probability transformation functions

Vitalii Bondar, Vira Babenko, Roman Trembovetskyi, Yurii Korobeinyk, Viktoriya Dzyuba

arXiv.org Artificial Intelligence 

This paper introduces a unified theoretical perspective that views deep generative models as probability transformation functions. Despite the apparent differences in architecture and training methodology among the major families of generative models (autoencoders, autoregressive models, generative adversarial networks, normalizing flows, diffusion models, and flow matching), we demonstrate that they all fundamentally operate by transforming simple, predefined distributions into complex target data distributions. This unifying perspective facilitates the transfer of methodological improvements between model architectures and provides a foundation for developing universal theoretical approaches, potentially leading to more efficient and effective generative modeling techniques.
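
As a minimal sketch of the "probability transformation" view (the standard change-of-variables identity familiar from normalizing flows, used here for illustration; the symbols p_Z, f, and p_X are assumed notation, not quoted from the paper): if a latent sample z is drawn from a simple base density p_Z (e.g., a standard Gaussian) and mapped through an invertible, differentiable generator f, the resulting model density is the pushforward

    \[ p_X(x) = p_Z\big(f^{-1}(x)\big)\,\big|\det J_{f^{-1}}(x)\big| \]

so training amounts to choosing f whose pushforward of p_Z matches the target data distribution. Families without a tractable inverse or density (e.g., GANs and diffusion models) realize the same transformation implicitly rather than through this explicit formula.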