TempoControl: Temporal Attention Guidance for Text-to-Video Models
Shira Schiber, Ofir Lindenbaum, Idan Schwartz
arXiv.org Artificial Intelligence
Recent advances in generative video models have enabled the creation of high-quality videos based on natural language prompts. However, these models frequently lack fine-grained temporal control, meaning they do not allow users to specify when particular visual elements should appear within a generated sequence. In this work, we introduce TempoControl, a method that allows for temporal alignment of visual concepts during inference, without requiring retraining or additional supervision. TempoControl utilizes cross-attention maps, a key component of text-to-video diffusion models, to guide the timing of concepts through a novel optimization approach. Our method steers attention using three complementary principles: aligning its temporal pattern with a control signal (correlation), adjusting its strength where visibility is required (magnitude), and preserving semantic consistency (entropy). TempoControl provides precise temporal control while maintaining high video quality and diversity. We demonstrate its effectiveness across various applications, including temporal reordering of single and multiple objects, action timing, and audio-aligned video generation. Please see our project page for more details: https://shira-schiber.github.io/TempoControl/.
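The three guidance principles in the abstract can be sketched as a composite loss over a token's cross-attention map. The following is a minimal illustration, not the paper's implementation: the attention layout `(frames, spatial positions)`, the loss weights, and the exact form of each term are assumptions for the sketch.

```python
import numpy as np

def temporal_guidance_loss(attn, control, lam_corr=1.0, lam_mag=0.5, lam_ent=0.1):
    """Hypothetical sketch of TempoControl-style guidance terms.

    attn:    (T, S) cross-attention weights of one text token,
             one row per video frame (assumed layout, not the paper's).
    control: (T,) binary signal, 1 on frames where the concept should appear.
    """
    # Temporal profile: mean attention mass per frame.
    profile = attn.mean(axis=1)

    # (1) Correlation: align the temporal attention pattern with the control signal.
    p = profile - profile.mean()
    c = control - control.mean()
    corr = (p * c).sum() / (np.linalg.norm(p) * np.linalg.norm(c) + 1e-8)
    loss_corr = 1.0 - corr  # zero when perfectly aligned

    # (2) Magnitude: encourage strong attention on frames requiring visibility.
    loss_mag = -(profile * control).sum() / (control.sum() + 1e-8)

    # (3) Entropy: keep per-frame spatial attention from dispersing
    #     (a proxy for preserving semantic consistency).
    probs = attn / (attn.sum(axis=1, keepdims=True) + 1e-8)
    loss_ent = -(probs * np.log(probs + 1e-8)).sum(axis=1).mean()

    return lam_corr * loss_corr + lam_mag * loss_mag + lam_ent * loss_ent
```

In a diffusion sampler, such a loss would be differentiated with respect to the latents at each denoising step to steer when the concept appears; here it only shows that attention aligned with the control signal scores lower than misaligned attention.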
Dec-8-2025