Distillation of a tractable model from the VQ-VAE
Armin Hadžić, Milan Papez, Tomáš Pevný
arXiv.org Artificial Intelligence
Deep generative models with discrete latent spaces, such as the Vector-Quantized Variational Autoencoder (VQ-VAE), offer excellent data generation capabilities, but, due to the large size of their latent space, their probabilistic inference is deemed intractable. We demonstrate that the VQ-VAE can be distilled into a tractable model by selecting a subset of latent variables with high probabilities. This simple strategy is particularly efficient when the VQ-VAE underutilizes its latent space, which is very often the case. We frame the distilled model as a probabilistic circuit and show that it preserves the expressiveness of the VQ-VAE while providing tractable probabilistic inference. Experiments illustrate competitive performance in density estimation and conditional generation tasks, challenging the view of the VQ-VAE as an inherently intractable model.
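The selection step described in the abstract can be sketched as follows. This is a minimal illustration under assumptions of ours, not the paper's exact criterion: the function `select_active_codes`, the cumulative-mass threshold, and the toy data are all hypothetical, and a real VQ-VAE codebook would be pruned from encoder assignments on training data.

```python
import numpy as np

def select_active_codes(code_indices, codebook_size, mass=0.99):
    """Return the codebook entries whose empirical usage frequency
    covers `mass` of all latent assignments (illustrative criterion)."""
    counts = np.bincount(code_indices.ravel(), minlength=codebook_size)
    probs = counts / counts.sum()
    order = np.argsort(probs)[::-1]          # codes, most-used first
    cum = np.cumsum(probs[order])
    k = int(np.searchsorted(cum, mass)) + 1  # smallest prefix covering `mass`
    return np.sort(order[:k])

# Toy example: a 512-entry codebook whose assignments concentrate
# on only 8 codes, mimicking an underutilized latent space.
rng = np.random.default_rng(0)
codes = rng.choice(8, size=10_000)
active = select_active_codes(codes, codebook_size=512)
print(len(active))  # 8 — far fewer than the nominal 512
```

The distilled model then only needs to reason over the retained codes, which is what makes tractable inference (e.g. as a probabilistic circuit) feasible.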
Sep-3-2025