Compositional Visual Generation


Review for NeurIPS paper: Compositional Visual Generation with Energy Based Models

Neural Information Processing Systems

Weaknesses: * The visual quality/fidelity of the generated images is quite low. Showing that fidelity on common metrics such as FID matches, or at least comes close to, that of GAN models would help validate that the approach supports high-fidelity generation (otherwise it may be that compositionality is achieved at the expense of fine detail or high fidelity, as is the case in e.g. [...]). Given that many prior works have explored combinations of properties for CelebA images with GANs, showing that the proposed approach can compete with them is especially important. Learning curves compared against other types of generative models would also be useful. Finally, note that the model's motivation and goals (achieving compositional generation through logical combination of concepts learned from data subsets) are similar to those of a prior VAE paper.


Compositional Visual Generation with Energy Based Models

Neural Information Processing Systems

A vital aspect of human intelligence is the ability to compose increasingly complex concepts out of simpler ideas, enabling both rapid learning and adaptation of knowledge. In this paper we show that energy-based models can exhibit this ability by directly combining probability distributions. Samples from the combined distribution correspond to compositions of concepts. For example, given a distribution for smiling faces, and another for male faces, we can combine them to generate smiling male faces. This allows us to generate natural images that simultaneously satisfy conjunctions, disjunctions, and negations of concepts.
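To make the logical operators concrete, here is a minimal sketch in PyTorch under stated assumptions: the quadratic energies are toy stand-ins for learned concept EBMs (the paper's energies are neural networks), and the sampler settings are illustrative. Conjunction sums energies (a product of densities), disjunction takes a negative logsumexp of negated energies (a mixture), negation subtracts a tempered concept energy, and samples are drawn with Langevin dynamics on the composed energy.

```python
import torch

# Toy stand-in energies for two concept EBMs (hypothetical; the real
# models are neural networks trained on labeled data subsets).
def energy_smiling(x):
    return ((x - 1.0) ** 2).sum(dim=-1)

def energy_male(x):
    return ((x + 1.0) ** 2).sum(dim=-1)

def conjunction(energies):
    # AND: product of densities <=> sum of energies.
    return lambda x: sum(E(x) for E in energies)

def disjunction(energies):
    # OR: mixture of densities <=> -logsumexp(-E_i).
    return lambda x: -torch.logsumexp(
        torch.stack([-E(x) for E in energies]), dim=0)

def negation(E_keep, E_not, alpha=0.5):
    # NOT (relative to a kept concept): divide out a density, tempered by alpha.
    return lambda x: E_keep(x) - alpha * E_not(x)

def langevin_sample(energy, n=16, dim=2, steps=200, step_size=0.01, noise=0.005):
    # Gradient-based MCMC on the composed energy landscape.
    x = torch.randn(n, dim, requires_grad=True)
    for _ in range(steps):
        grad = torch.autograd.grad(energy(x).sum(), x)[0]
        x = x - step_size * grad + noise * torch.randn_like(x)
        x = x.detach().requires_grad_(True)
    return x.detach()

samples = langevin_sample(conjunction([energy_smiling, energy_male]))
print(samples.mean(dim=0))  # settles where both energies are low (near 0)
```

With the toy energies above, the conjunction's samples concentrate near x = 0, the region both concepts score well; the same machinery applies when the energies are learned image models.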


Compositional Visual Generation with Composable Diffusion Models

Liu, Nan, Li, Shuang, Du, Yilun, Torralba, Antonio, Tenenbaum, Joshua B.

arXiv.org Artificial Intelligence

Large text-guided diffusion models, such as DALLE-2, are able to generate stunning photorealistic images given natural language descriptions. While such models are highly flexible, they struggle to understand the composition of certain concepts, such as confusing the attributes of different objects or relations between objects. In this paper, we propose an alternative structured approach for compositional generation using diffusion models. An image is generated by composing a set of diffusion models, with each of them modeling a certain component of the image. To do this, we interpret diffusion models as energy-based models in which the data distributions defined by the energy functions may be explicitly combined. The proposed method can generate scenes at test time that are substantially more complex than those seen in training, composing sentence descriptions, object relations, human facial attributes, and even generalizing to new combinations that are rarely seen in the real world. We further illustrate how our approach may be used to compose pre-trained text-guided diffusion models and generate photorealistic images containing all the details described in the input descriptions, including the binding of certain object attributes that have been shown difficult for DALLE-2. These results point to the effectiveness of the proposed method in promoting structured generalization for visual generation. Project page: https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/
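The conjunction case can be sketched in a few lines, assuming a hypothetical noise-prediction network `eps_model(x, t, cond)` where `cond=None` denotes the unconditional model; the names, toy network, and weights below are illustrative, not the paper's implementation. The composed prediction adds each concept's guidance direction to the unconditional prediction, which approximates sampling from the product of the concept distributions.

```python
import torch

def composed_eps(eps_model, x, t, conditions, weights):
    """Conjunction of concepts: eps_uncond + sum_i w_i * (eps_cond_i - eps_uncond).

    Each (eps_cond_i - eps_uncond) term acts as a score for one concept;
    summing the weighted terms composes the concepts at every denoising step.
    """
    eps_uncond = eps_model(x, t, None)
    eps = eps_uncond
    for cond, w in zip(conditions, weights):
        eps = eps + w * (eps_model(x, t, cond) - eps_uncond)
    return eps

# Deterministic toy network so the sketch runs end to end (hypothetical).
def eps_model(x, t, cond):
    shift = 0.0 if cond is None else 0.1 * len(cond)
    return x - shift

x = torch.randn(4, 3, 8, 8)
eps = composed_eps(eps_model, x, t=0.5,
                   conditions=["smiling face", "male face"],
                   weights=[1.0, 1.0])
print(eps.shape)  # torch.Size([4, 3, 8, 8])
```

This is the same additive composition as in the EBM paper above, transported to diffusion models by treating the noise prediction as a (scaled) score of an energy-based density; a full sampler would call `composed_eps` inside the usual denoising loop.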