Subspace Diffusion Generative Models
Bowen Jing, Gabriele Corso, Renato Berlinghieri, Tommi Jaakkola
arXiv.org Artificial Intelligence
Score-based models generate samples by mapping noise to data (and vice versa) via a high-dimensional diffusion process. We question whether it is necessary to run this entire process at high dimensionality and incur all the inconveniences thereof. Instead, we restrict the diffusion via projections onto subspaces as the data distribution evolves toward noise. When applied to state-of-the-art models, our framework simultaneously improves sample quality -- reaching an FID of 2.17 on unconditional CIFAR-10 -- and reduces the computational cost of inference for the same number of denoising steps. Our framework is fully compatible with continuous-time diffusion and retains its flexible capabilities, including exact log-likelihoods and controllable generation. Code is available at https://github.com/bjing2016/subspace-diffusion.
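The core idea in the abstract is to run the reverse diffusion in a lower-dimensional subspace while the sample is still dominated by noise, and only switch to the full-dimensional model for the final denoising steps. The sketch below is a hypothetical toy illustration of that idea, not the authors' implementation (see the linked repository for that): the score networks, the linear-beta schedule, the subspace basis `U`, and the switching time `t_switch` are all placeholder assumptions.

```python
# Hypothetical sketch of subspace diffusion sampling (not the authors' code):
# integrate the reverse SDE in a low-dimensional subspace at high noise levels,
# then lift into the full space and finish denoising there.

import numpy as np

rng = np.random.default_rng(0)

D_full, D_sub = 8, 4                                        # toy dimensionalities
U = np.linalg.qr(rng.standard_normal((D_full, D_sub)))[0]   # orthonormal subspace basis (assumed)

def score_sub(x, t):
    # Placeholder subspace score model: exact score of a standard Gaussian, -x.
    return -x

def score_full(x, t):
    # Placeholder full-space score model.
    return -x

def reverse_step(x, t, dt, score):
    # One Euler-Maruyama step of a simple VP-style reverse SDE (illustrative).
    beta = 0.1 + 19.9 * t                                   # assumed linear beta(t) schedule
    drift = -0.5 * beta * x - beta * score(x, t)
    noise = np.sqrt(beta * dt) * rng.standard_normal(x.shape)
    return x - drift * dt + noise

def sample(n_steps=1000, t_switch=0.5):
    ts = np.linspace(1.0, 1e-3, n_steps)
    dt = ts[0] - ts[1]
    x_sub = rng.standard_normal(D_sub)                      # start from noise in the subspace
    x_full = None
    for t in ts:
        if t > t_switch:
            # Cheap steps: the score model only sees the subspace coordinates.
            x_sub = reverse_step(x_sub, t, dt, score_sub)
        else:
            if x_full is None:
                # Lift into the full space; fill the orthogonal complement with noise
                # so the sample roughly matches the full diffusion's marginal at t_switch.
                x_full = U @ x_sub + (np.eye(D_full) - U @ U.T) @ rng.standard_normal(D_full)
            x_full = reverse_step(x_full, t, dt, score_full)
    return x_full

print(sample())
```

The computational saving in the paper comes from the fact that steps taken before `t_switch` operate on the smaller subspace representation; the sketch mirrors that split but leaves the choice of subspace, noise schedule, and switching time entirely schematic.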
Feb-27-2023
- Genre:
- Research Report > Promising Solution (0.48)
- Technology:
- Information Technology > Artificial Intelligence
- Machine Learning (1.00)
- Natural Language > Generation (0.41)
- Vision (0.68)