Taming Diffusion Probabilistic Models for Character Control
Rui Chen, Mingyi Shi, Shaoli Huang, Ping Tan, Taku Komura, Xuelin Chen
– arXiv.org Artificial Intelligence
We present a novel character control framework that effectively utilizes motion diffusion probabilistic models to generate high-quality, diverse character animations in real time, responding to a variety of dynamic user-supplied control signals. At the heart of our method lies a transformer-based Conditional Autoregressive Motion Diffusion Model (CAMDM), which takes the character's historical motion as input and can generate a range of diverse potential future motions conditioned on high-level, coarse user control. To meet the demands for diversity, controllability, and computational efficiency required by a real-time controller, we incorporate several key algorithmic designs: separate condition tokenization, classifier-free guidance on past motion, and heuristic future trajectory extension, all of which address the challenges of taming motion diffusion probabilistic models for character control. As a result, our work represents the first model that enables real-time generation of high-quality, diverse character animations from interactive user control, supporting animation of the character in multiple styles with a single unified model. We evaluate our method on a diverse set of locomotion skills, demonstrating its merits over existing character controllers. Project page and source code: https://aiganimation.github.io/CAMDM/
Apr-23-2024
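The abstract names three key designs: separate condition tokenization, classifier-free guidance on past motion, and heuristic future trajectory extension. The sketch below shows one plausible way these pieces could fit together in PyTorch. It is not the authors' implementation (see the linked source code for that): the class names, tensor dimensions, the guidance weight `w`, and the linear trajectory extrapolation are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of the mechanisms named in the abstract.
import torch
import torch.nn as nn

class CAMDMSketch(nn.Module):
    """Transformer denoiser over noisy future poses, conditioned on past motion
    and coarse user control. Positional encodings are omitted for brevity."""
    def __init__(self, pose_dim=64, d_model=256, n_styles=8, n_timesteps=1000):
        super().__init__()
        # Separate condition tokenization: each condition gets its own token(s)
        # rather than being fused into a single conditioning vector.
        self.pose_in = nn.Linear(pose_dim, d_model)       # noisy future poses -> tokens
        self.past_in = nn.Linear(pose_dim, d_model)       # historical motion -> tokens
        self.traj_in = nn.Linear(3, d_model)              # coarse trajectory -> tokens
        self.style_emb = nn.Embedding(n_styles, d_model)  # style label -> one token
        self.t_emb = nn.Embedding(n_timesteps, d_model)   # diffusion timestep -> one token
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=4)
        self.pose_out = nn.Linear(d_model, pose_dim)

    def forward(self, x_t, t, past, traj, style, drop_past=False):
        if drop_past:                                     # null past-motion condition,
            past = torch.zeros_like(past)                 # used for classifier-free guidance
        tokens = torch.cat([
            self.t_emb(t).unsqueeze(1),
            self.style_emb(style).unsqueeze(1),
            self.traj_in(traj),
            self.past_in(past),
            self.pose_in(x_t),
        ], dim=1)
        h = self.backbone(tokens)
        n_future = x_t.shape[1]
        return self.pose_out(h[:, -n_future:])            # prediction for future poses

@torch.no_grad()
def guided_denoise(model, x_t, t, past, traj, style, w=2.0):
    # Classifier-free guidance on the PAST-MOTION condition: extrapolate between
    # the past-conditioned and past-dropped predictions, trading off continuity
    # with responsiveness to new control. The weight w=2.0 is an assumption.
    cond = model(x_t, t, past, traj, style, drop_past=False)
    uncond = model(x_t, t, past, traj, style, drop_past=True)
    return uncond + w * (cond - uncond)

def extend_trajectory(traj, n_extra=4):
    # Heuristic future trajectory extension (assumption: linearly extrapolate
    # the last control step to pad the trajectory condition horizon).
    step = traj[:, -1:] - traj[:, -2:-1]                  # (B, 1, 3)
    ramp = torch.arange(1, n_extra + 1, device=traj.device).view(1, -1, 1)
    return torch.cat([traj, traj[:, -1:] + step * ramp], dim=1)
```

At runtime, one would call `guided_denoise` inside a standard diffusion sampling loop over a short future window, play back the first few predicted frames, and then re-plan autoregressively as new control signals arrive; those scheduling details are beyond this sketch.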