Generative Autoregressive Networks for 3D Dancing Move Synthesis from Music

Hyemin Ahn, Jaehun Kim, Kihyun Kim, Songhwai Oh

arXiv.org Machine Learning 

This paper proposes a framework that generates a sequence of three-dimensional human dance poses for a given piece of music. The proposed framework consists of three components: a music feature encoder, a pose generator, and a music genre classifier. We focus on integrating these components to generate realistic 3D human dancing moves from music, which can be applied to artificial agents and humanoid robots. The trained dance pose generator, a generative autoregressive model, is able to synthesize dance sequences longer than 5,000 pose frames. Experimental results on dance sequences generated from various songs show that the proposed method produces human-like dancing moves for a given piece of music. In addition, a generated 3D dance sequence is applied to a humanoid robot, showing that the proposed framework can make a robot dance just by listening to music.

Dance is one of the most important forms of performing arts and has emerged in all known cultures. As a specific subcategory of theatrical dance, choreography associated with music is among its most popular forms, usually designed and physically performed by professional choreographers.
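To make the three-component structure concrete, the sketch below shows one plausible way the music feature encoder, autoregressive pose generator, and genre classifier could fit together. This is a minimal illustration, not the authors' implementation: the module names, GRU-based architecture, and dimensions (80-d audio features, 63-d pose vectors, 4 genres) are assumptions for the example only. The key point it illustrates is the autoregressive feedback loop, in which each predicted pose is fed back as input for the next frame, so the generator can roll out sequences of arbitrary length (e.g., more than 5,000 frames, as reported in the abstract).

```python
import torch
import torch.nn as nn


class MusicFeatureEncoder(nn.Module):
    """Encodes per-frame audio features (e.g., mel-spectrogram slices) into a music context."""
    def __init__(self, audio_dim=80, hidden_dim=256):          # dims are illustrative assumptions
        super().__init__()
        self.rnn = nn.GRU(audio_dim, hidden_dim, batch_first=True)

    def forward(self, audio):                                   # audio: (B, T, audio_dim)
        context, _ = self.rnn(audio)                             # context: (B, T, hidden_dim)
        return context


class PoseGenerator(nn.Module):
    """Autoregressive generator: predicts the next 3D pose from the previous pose and the music context."""
    def __init__(self, pose_dim=63, hidden_dim=256):             # e.g., 21 joints x 3 coordinates (assumed)
        super().__init__()
        self.cell = nn.GRUCell(pose_dim + hidden_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, pose_dim)

    def generate(self, context, init_pose):                      # context: (B, T, H), init_pose: (B, pose_dim)
        B, T, _ = context.shape
        h = context.new_zeros(B, self.cell.hidden_size)
        pose, poses = init_pose, []
        for t in range(T):                                        # one pose frame per music frame
            h = self.cell(torch.cat([pose, context[:, t]], dim=-1), h)
            pose = self.out(h)                                    # fed back in as input at step t + 1
            poses.append(pose)
        return torch.stack(poses, dim=1)                          # (B, T, pose_dim)


class GenreClassifier(nn.Module):
    """Auxiliary head that predicts the music genre from the pooled music context."""
    def __init__(self, hidden_dim=256, n_genres=4):
        super().__init__()
        self.fc = nn.Linear(hidden_dim, n_genres)

    def forward(self, context):
        return self.fc(context.mean(dim=1))                      # pool over time, then classify


# Usage sketch: encode the music, then roll out poses autoregressively.
encoder, generator, classifier = MusicFeatureEncoder(), PoseGenerator(), GenreClassifier()
audio = torch.randn(1, 5000, 80)                                  # ~5,000 audio frames (dummy input)
context = encoder(audio)
dance = generator.generate(context, init_pose=torch.zeros(1, 63))  # (1, 5000, 63) pose sequence
genre_logits = classifier(context)
```

Because generation is driven frame by frame from the encoded music, the same loop works for songs of any length; the genre classifier plays an auxiliary role, here shown simply as a head on the pooled music context.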
