Live Music Models

Lyria Team, Caillon, Antoine, McWilliams, Brian, Tarakajian, Cassie, Simon, Ian, Manco, Ilaria, Engel, Jesse, Constant, Noah, Li, Yunpeng, Denk, Timo I., Lalama, Alberto, Agostinelli, Andrea, Huang, Cheng-Zhi Anna, Manilow, Ethan, Brower, George, Erdogan, Hakan, Lei, Heidi, Rolnick, Itai, Grishchenko, Ivan, Orsini, Manu, Kastelic, Matej, Zuluaga, Mauricio, Verzetti, Mauro, Dooley, Michael, Skopek, Ondrej, Ferrer, Rafael, Petridis, Savvas, Borsos, Zalán, Oord, Aäron van den, Eck, Douglas, Collins, Eli, Baldridge, Jason, Hume, Tom, Donahue, Chris, Han, Kehang, Roberts, Adam

arXiv.org Artificial Intelligence 

We introduce a new class of generative models for music called live music models that produce a continuous stream of music in real time with synchronized user control. We release Magenta RealTime, an open-weights live music model that can be steered using text or audio prompts to control acoustic style. On automatic metrics of music quality, Magenta RealTime outperforms other open-weights music generation models, despite using fewer parameters and offering first-of-its-kind live generation capabilities. We also release Lyria RealTime, an API-based model with extended controls, offering access to our most powerful model with wide prompt coverage. These models demonstrate a new paradigm for AI-assisted music creation that emphasizes human-in-the-loop interaction for live music performance.