StorySync: Training-Free Subject Consistency in Text-to-Image Generation via Region Harmonization
Gopalji Gaur, Mohammadreza Zolfaghari, Thomas Brox
arXiv.org Artificial Intelligence
Generating a coherent sequence of images that tells a visual story with text-to-image diffusion models faces the critical challenge of maintaining subject consistency across all story scenes. Existing approaches, which typically rely on fine-tuning or retraining, are computationally expensive, time-consuming, and often interfere with the model's pre-existing capabilities. In this paper, we propose an efficient, training-free method for consistent subject generation. It works seamlessly with pre-trained diffusion models by introducing masked cross-image attention sharing to dynamically align subject features across a batch of images, and regional feature harmonization to refine visually similar details for improved subject consistency. Experimental results demonstrate that our approach generates visually consistent subjects across a variety of scenarios while preserving the creative abilities of the diffusion model.
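The abstract's core mechanism, masked cross-image attention sharing, can be illustrated with a minimal sketch: each image's queries attend not only to its own tokens but also to the subject-masked tokens of every other image in the batch, so subject features are pulled into alignment. This is an illustrative reconstruction, not the paper's implementation; the function name, shapes, and masking convention are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def masked_cross_image_attention(Q, K, V, masks):
    """Hypothetical sketch of masked cross-image attention sharing.

    Q, K, V: (B, N, d) per-image query/key/value token arrays.
    masks:   (B, N) binary subject masks (1 = subject token).

    Each image's queries attend to all of its own tokens plus the
    subject tokens of the other images in the batch, which shares
    subject features across images.
    """
    B, N, d = Q.shape
    # Flatten keys/values/masks across the batch: (B*N, ...)
    K_all = K.reshape(B * N, d)
    V_all = V.reshape(B * N, d)
    m_all = masks.reshape(B * N)
    out = np.empty_like(Q)
    for b in range(B):
        logits = Q[b] @ K_all.T / np.sqrt(d)  # (N, B*N)
        # Allow attention to the image's own tokens and to
        # subject tokens of every image; block the rest.
        own = np.zeros(B * N, dtype=bool)
        own[b * N:(b + 1) * N] = True
        allowed = own | (m_all > 0)
        logits[:, ~allowed] = -1e9
        out[b] = softmax(logits, axis=-1) @ V_all
    return out
```

Note that with a batch of one image the mask has no effect and this reduces to ordinary self-attention; the cross-image sharing only matters when several story scenes are generated jointly.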
Aug-7-2025