The Chosen One: Consistent Characters in Text-to-Image Diffusion Models
Avrahami, Omri, Hertz, Amir, Vinker, Yael, Arar, Moab, Fruchter, Shlomi, Fried, Ohad, Cohen-Or, Daniel, Lischinski, Dani
Recent advances in text-to-image generation models have unlocked vast potential for visual creativity. However, these models struggle with the generation of consistent characters, a crucial aspect for numerous real-world applications such as story visualization, game development asset design, advertising, and more. Current methods typically rely on multiple pre-existing images of the target character or involve labor-intensive manual processes. In this work, we propose a fully automated solution for consistent character generation, with the sole input being a text prompt. We introduce an iterative procedure that, at each stage, identifies a coherent set of images sharing a similar identity and extracts a more consistent identity from this set. Our quantitative analysis demonstrates that our method strikes a better balance between prompt alignment and identity consistency compared to the baseline methods, and these findings are reinforced by a user study. To conclude, we showcase several practical applications of our approach. The project page is available at https://omriavrahami.com/the-chosen-one
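The abstract's iterative procedure can be pictured as a loop that alternates between sampling images, grouping them by identity similarity, and distilling a refined identity from the most cohesive group. The sketch below is only an illustration of that loop structure, not the authors' implementation: the helper names generate_images, embed_identity, and update_identity are hypothetical placeholders, and the stub bodies stand in for a real diffusion model, feature extractor, and personalization step.

```python
# Minimal sketch of the iterative consistency loop described in the abstract.
# All helpers below are hypothetical stubs, NOT the authors' actual API.

import numpy as np
from sklearn.cluster import KMeans


def generate_images(prompt, identity, n=32):
    """Placeholder: sample n images from a text-to-image model,
    optionally conditioned on the current identity representation."""
    rng = np.random.default_rng(0 if identity is None else 1)
    return rng.normal(size=(n, 512))  # stand-in for real images


def embed_identity(images):
    """Placeholder: map images to identity-feature vectors
    (e.g., with a pretrained visual feature extractor)."""
    return images  # stand-in: treat the stubs as embeddings directly


def update_identity(image_set):
    """Placeholder: distill a more consistent identity from a coherent
    image set (e.g., by personalizing the generator on that set)."""
    return image_set.mean(axis=0)


def consistent_character_loop(prompt, n_iters=5, n_clusters=4):
    identity = None
    for _ in range(n_iters):
        images = generate_images(prompt, identity)
        feats = embed_identity(images)
        # Group generations by identity similarity and keep the most
        # cohesive cluster as the "coherent set" mentioned in the abstract.
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
        best, best_spread = None, np.inf
        for c in range(n_clusters):
            members = feats[labels == c]
            if len(members) < 2:
                continue
            spread = members.var(axis=0).mean()  # lower spread = more coherent
            if spread < best_spread:
                best, best_spread = members, spread
        identity = update_identity(best)
    return identity


if __name__ == "__main__":
    print(consistent_character_loop("a cute wizard character").shape)
```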
arXiv.org Artificial Intelligence
Nov-27-2023