Learning to Generate Rigid Body Interactions with Video Diffusion Models
David Romero, Ariana Bermudez, Hao Li, Fabio Pizzati, Ivan Laptev
arXiv.org Artificial Intelligence
Recent video generation models have achieved remarkable progress and are now deployed in film, social media production, and advertising. Beyond their creative potential, such models also hold promise as world simulators for robotics and embodied decision making. Despite strong advances, however, current approaches still struggle to generate physically plausible object interactions and lack object-level control mechanisms. To address these limitations, we introduce KineMask, an approach for video generation that enables realistic rigid body control, interactions, and effects. Given a single image and a specified object velocity, our method generates videos with inferred motions and future object interactions. We propose a two-stage training strategy that gradually removes future motion supervision via object masks. Using this strategy, we train video diffusion models (VDMs) on synthetic scenes of simple interactions and demonstrate significant improvements in object interactions in real scenes. Furthermore, KineMask integrates low-level motion control with high-level textual conditioning via predicted scene descriptions, enabling the synthesis of complex dynamical phenomena. Our experiments show that KineMask achieves strong improvements over recent models of comparable size. Ablation studies further highlight the complementary roles of low- and high-level conditioning in VDMs. Our code, model, and data will be made publicly available. Project Page: https://daromog.github.io/KineMask/
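The abstract mentions a two-stage training strategy that gradually removes future motion supervision via object masks. The sketch below is purely illustrative and not the authors' implementation: it shows one plausible way such a curriculum could look, where future-mask conditioning is always provided in stage 1 and then stochastically dropped with increasing probability in stage 2, forcing the model to infer motion on its own. All function names and the linear-decay schedule are assumptions.

```python
import numpy as np

def mask_supervision_schedule(step, stage1_steps, stage2_steps):
    """Hypothetical schedule: full future-mask supervision during
    stage 1, then linearly decayed to zero over stage 2."""
    if step < stage1_steps:
        return 1.0  # stage 1: always condition on future object masks
    frac = (step - stage1_steps) / max(stage2_steps, 1)
    return float(max(0.0, 1.0 - frac))  # stage 2: gradual removal

def apply_mask_conditioning(frames, object_masks, keep_prob, rng):
    """With probability keep_prob, append the future object masks as a
    conditioning channel; otherwise append zeros so the model must
    infer future motion without explicit supervision."""
    if rng.random() < keep_prob:
        cond = object_masks                  # supervised: masks visible
    else:
        cond = np.zeros_like(object_masks)   # masks withheld
    return np.concatenate([frames, cond], axis=-1)

# Toy usage: 4 frames of 8x8 RGB plus a 1-channel mask.
rng = np.random.default_rng(0)
frames = np.zeros((4, 8, 8, 3))
masks = np.ones((4, 8, 8, 1))
p = mask_supervision_schedule(step=150, stage1_steps=100, stage2_steps=100)
batch = apply_mask_conditioning(frames, masks, p, rng)
```

Any real curriculum could of course decay per-object rather than per-batch, or anneal mask opacity instead of dropping masks entirely; the paper's actual mechanism should be taken from the full text.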
Dec-2-2025