
Neural Information Processing Systems

Compute is essential to modern machine learning applications, and more compute typically yields better results. It is thus important to compare our method's compute requirements to competing methods. Table 10: Training compute requirements for our diffusion models compared to StyleGAN2 and BigGAN-deep. Under reasonable settings for β_t and T, the distribution q(x_T) is nearly an isotropic Gaussian distribution, so sampling x_T is trivial. In particular, they do not directly parameterize μ_θ(x_t, t) as a neural network, but instead train a model ε_θ(x_t, t) to predict ε from Equation 3.
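The ε-prediction parameterization mentioned above can be sketched as follows: sample a timestep t and noise ε, form the noisy x_t from the closed-form forward process q(x_t | x_0), and regress the model's output against ε. This is a minimal NumPy sketch under assumed conventions; the names `eps_model` and the linear β schedule are illustrative stand-ins, not the paper's exact setup.

```python
import numpy as np

def ddpm_training_step(x0, eps_model, alphas_cumprod, rng):
    """One simplified noise-prediction training step:
    loss = || eps - eps_theta(x_t, t) ||^2, where
    x_t = sqrt(a_bar_t) * x0 + sqrt(1 - a_bar_t) * eps."""
    T = len(alphas_cumprod)
    t = rng.integers(T)                      # uniform random timestep
    eps = rng.standard_normal(x0.shape)      # target noise
    a_bar = alphas_cumprod[t]
    x_t = np.sqrt(a_bar) * x0 + np.sqrt(1.0 - a_bar) * eps
    pred = eps_model(x_t, t)                 # stand-in for the network
    return np.mean((eps - pred) ** 2)

# Toy usage with a "model" that always predicts zero noise.
rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)       # illustrative linear schedule
alphas_cumprod = np.cumprod(1.0 - betas)
x0 = rng.standard_normal((8, 8))
loss = ddpm_training_step(x0, lambda x, t: np.zeros_like(x), alphas_cumprod, rng)
```

Note that sampling x_T at generation time is just `rng.standard_normal(shape)`, consistent with q(x_T) being nearly an isotropic Gaussian.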


Flamingo: a Visual Language Model for Few-Shot Learning

Neural Information Processing Systems

Building models that can be rapidly adapted to novel tasks using only a handful of annotated examples is an open challenge for multimodal machine learning research. We introduce Flamingo, a family of Visual Language Models (VLM) with this ability. We propose key architectural innovations to: (i) bridge powerful pretrained vision-only and language-only models, (ii) handle sequences of arbitrarily interleaved visual and textual data, and (iii) seamlessly ingest images or videos as inputs. Thanks to their flexibility, Flamingo models can be trained on large-scale multimodal web corpora containing arbitrarily interleaved text and images, which is key to endow them with in-context few-shot learning capabilities. We perform a thorough evaluation of our models, exploring and measuring their ability to rapidly adapt to a variety of image and video tasks. These include open-ended tasks such as visual question-answering, where the model is prompted with a question which it has to answer, captioning tasks, which evaluate the ability to describe a scene or an event, and close-ended tasks such as multiple-choice visual question-answering. For tasks lying anywhere on this spectrum, a single Flamingo model can achieve a new state of the art with few-shot learning, simply by prompting the model with task-specific examples. On numerous benchmarks, Flamingo outperforms models fine-tuned on thousands of times more task-specific data.
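The few-shot adaptation described above amounts to constructing an interleaved prompt: a handful of (image, text) support examples followed by the query image. The sketch below is purely illustrative; the `<image:...>` placeholder and `build_prompt` helper are hypothetical and not Flamingo's actual tokenization or API.

```python
def build_prompt(support, query_prefix=""):
    """Build an interleaved few-shot prompt.

    support: list of (image_ref, target_text) examples; the query image
    comes last so the model completes the text after `query_prefix`."""
    parts = []
    for image_ref, target in support:
        parts.append(f"<image:{image_ref}> {target}")
    parts.append(f"<image:query> {query_prefix}")
    return "\n".join(parts)

# Toy usage for a captioning-style task.
prompt = build_prompt(
    [("dog.jpg", "Output: a dog playing in the park."),
     ("cat.jpg", "Output: a cat asleep on a sofa.")],
    query_prefix="Output:",
)
```

Open-ended tasks (VQA, captioning) and close-ended ones (multiple choice) differ only in what text follows each image in the support examples.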


Linguistic Binding in Diffusion Models: Enhancing Attribute Correspondence through Attention Map Alignment

Neural Information Processing Systems

Text-conditioned image generation models often generate incorrect associations between entities and their visual attributes. This reflects an impaired mapping between linguistic binding of entities and modifiers in the prompt and visual binding of the corresponding elements in the generated image. As one example, a query like ``a pink sunflower and a yellow flamingo'' may incorrectly produce an image of a yellow sunflower and a pink flamingo. To remedy this issue, we propose SynGen, an approach which first syntactically analyses the prompt to identify entities and their modifiers, and then uses a novel loss function that encourages the cross-attention maps to agree with the linguistic binding reflected by the syntax. Specifically, we encourage large overlap between attention maps of entities and their modifiers, and small overlap with other entities and modifier words. The loss is optimized during inference, without retraining or fine-tuning the model. Human evaluation on three datasets, including one new and challenging set, demonstrates significant improvements of SynGen compared with current state-of-the-art methods. This work highlights how making use of sentence structure during inference can efficiently and substantially improve the faithfulness of text-to-image generation.
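The objective described above can be illustrated with a toy sketch: reward overlap between the attention maps of a syntactically bound (entity, modifier) pair, and penalize overlap between unrelated words. The min-based overlap measure and the 8×8 maps here are illustrative assumptions, not SynGen's actual loss terms.

```python
import numpy as np

def overlap(a, b):
    """Symmetric overlap of two attention maps, each normalized to sum to 1.
    Returns a value in [0, 1]: 1 for identical maps, 0 for disjoint ones."""
    a = a / a.sum()
    b = b / b.sum()
    return np.minimum(a, b).sum()

def binding_loss(attn, bound_pairs, unrelated_pairs):
    """attn: dict mapping token -> 2D cross-attention map.
    bound_pairs: (entity, modifier) tokens that should share a region.
    unrelated_pairs: token pairs whose maps should stay apart."""
    loss = 0.0
    for e, m in bound_pairs:
        loss += 1.0 - overlap(attn[e], attn[m])   # encourage large overlap
    for a, b in unrelated_pairs:
        loss += overlap(attn[a], attn[b])          # penalize spurious overlap
    return loss

# Toy maps: "pink sunflower" on the left, "yellow flamingo" on the right.
left = np.zeros((8, 8)); left[:, :4] = 1.0
right = np.zeros((8, 8)); right[:, 4:] = 1.0
attn = {"sunflower": left, "pink": left, "flamingo": right, "yellow": right}
good = binding_loss(attn,
                    [("sunflower", "pink"), ("flamingo", "yellow")],
                    [("sunflower", "flamingo")])
```

In SynGen this kind of loss is minimized at inference time by updating the latents at each denoising step, so no model weights are changed.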



Russia's Putin hails war advances; Ukraine retakes parts of Donetsk

Al Jazeera

John Psaropoulos is an independent journalist based in Athens and has been Al Jazeera's correspondent in Southeast Europe since 2012. Ukraine reclaimed 62sq km (24sq miles) of territory last month, its commander in chief revealed on Monday, contradicting Russian President Vladimir Putin's recent claim to be advancing "in all directions".