StoryDALL-E: Adapting Pretrained Text-to-Image Transformers for Story Continuation

Adyasha Maharana, Darryl Hannan, Mohit Bansal

arXiv.org Artificial Intelligence

Recent advances in text-to-image synthesis have led to large pretrained transformers with excellent capabilities to generate visualizations from a given text. However, these models are ill-suited for specialized tasks like story visualization, which requires an agent to produce a sequence of images given a corresponding sequence of captions, forming a narrative. Moreover, we find that the story visualization task fails to accommodate generalization to unseen plots and characters in new narratives. Hence, we first propose the task of story continuation, where the generated visual story is conditioned on a source image, allowing for better generalization to narratives with new characters. Then, we enhance or 'retro-fit' the pretrained text-to-image synthesis models with task-specific modules for (a) sequential image generation and (b) copying relevant elements from an initial frame. Next, we explore full-model finetuning, as well as prompt-based tuning for parameter-efficient adaptation, of the pretrained model. We evaluate our approach, StoryDALL-E, on two existing datasets, PororoSV and FlintstonesSV, and introduce a new dataset, DiDeMoSV, collected from a video-captioning dataset. We also develop a model, StoryGANc, based on Generative Adversarial Networks (GANs) for story continuation, and compare it with the StoryDALL-E model to demonstrate the advantages of our approach. We show that our retro-fitting approach outperforms GAN-based models for story continuation and facilitates copying of visual elements from the source image, thereby improving continuity in the generated visual story. Finally, our analysis suggests that pretrained transformers struggle to comprehend narratives containing several characters. Overall, our work demonstrates that pretrained text-to-image synthesis models can be adapted for complex and low-resource tasks like story continuation.


🍱 The Text-to-Image Synthesis Revolution


Next week, we will start a new series about text-to-image synthesis models. In the last year, this deep learning discipline has seen an astonishing level of progress. You have probably heard about OpenAI's DALL-E 2, but plenty of other impressive text-to-image generation models have been created in the last few months. We have seen Google come up with models like Imagen and Parti; Meta has done amazing work with Make-A-Scene; and OpenAI created GLIDE and, of course, DALL-E 2. All these models push the boundaries of text-to-image synthesis in ways that challenge human imagination. However, the innovation is coming not only from the big AI labs but also from startups in the space.