MusicLM


Efficient Neural Music Generation

Neural Information Processing Systems

Recent progress in music generation has been remarkably advanced by the state-of-the-art MusicLM, which comprises a hierarchy of three LMs for semantic, coarse acoustic, and fine acoustic modeling, respectively. Yet, sampling with MusicLM requires processing through these LMs one by one to obtain the fine-grained acoustic tokens, making it computationally expensive and prohibitive for real-time generation. Efficient music generation with quality on par with MusicLM remains a significant challenge. In this paper, we present MeLoDy (M for music; L for LM; D for diffusion), an LM-guided diffusion model that generates music audio of state-of-the-art quality while reducing the forward passes in MusicLM by 95.7% to 99.6%, respectively, for sampling 10s to 30s of music. MeLoDy inherits the highest-level LM from MusicLM for semantic modeling, and applies a novel dual-path diffusion (DPD) model and an audio VAE-GAN to efficiently decode the conditioning semantic tokens into a waveform. DPD simultaneously models the coarse and fine acoustics by incorporating the semantic information into segments of latents via cross-attention at each denoising step.


MusicRL: Aligning Music Generation to Human Preferences

Cideron, Geoffrey, Girgin, Sertan, Verzetti, Mauro, Vincent, Damien, Kastelic, Matej, Borsos, Zalán, McWilliams, Brian, Ungureanu, Victor, Bachem, Olivier, Pietquin, Olivier, Geist, Matthieu, Hussenot, Léonard, Zeghidour, Neil, Agostinelli, Andrea

arXiv.org Artificial Intelligence

We propose MusicRL, the first music generation system finetuned from human feedback. Appreciation of text-to-music models is particularly subjective since the concept of musicality, as well as the specific intention behind a caption, is user-dependent (e.g. a caption such as "upbeat work-out music" can map to a retro guitar solo or a techno pop beat). Not only does this make supervised training of such models challenging, but it also calls for integrating continuous human feedback in their post-deployment finetuning. MusicRL is a pretrained autoregressive MusicLM (Agostinelli et al., 2023) model of discrete audio tokens finetuned with reinforcement learning to maximise sequence-level rewards. We design reward functions related specifically to text-adherence and audio quality with the help of selected raters, and use those to finetune MusicLM into MusicRL-R. We deploy MusicLM to users and collect a substantial dataset comprising 300,000 pairwise preferences. Using Reinforcement Learning from Human Feedback (RLHF), we train MusicRL-U, the first text-to-music model that incorporates human feedback at scale. Human evaluations show that both MusicRL-R and MusicRL-U are preferred to the baseline. Ultimately, MusicRL-RU combines the two approaches and results in the best model according to human raters. Ablation studies shed light on the musical attributes influencing human preferences, indicating that text adherence and quality account for only part of it. This underscores the prevalence of subjectivity in musical appreciation and calls for further involvement of human listeners in the finetuning of music generation models.
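The abstract describes optimizing a token-level language model against a single sequence-level reward. A minimal sketch of the underlying REINFORCE-style objective follows; the function name, the baseline, and the toy numbers are illustrative assumptions, not taken from MusicRL:

```python
import numpy as np

def reinforce_loss(token_logprobs, reward, baseline=0.0):
    """Policy-gradient loss for one sampled token sequence.

    A sequence-level scalar reward (e.g. a learned text-adherence or
    audio-quality score) scales the sequence log-likelihood; subtracting
    a baseline reduces the variance of the gradient estimate.
    """
    seq_logprob = float(np.sum(token_logprobs))
    # Negated so that minimizing the loss maximizes expected reward.
    return -(reward - baseline) * seq_logprob

# Toy example: two tokens each sampled with probability 0.5, reward 1.0.
loss = reinforce_loss(np.log([0.5, 0.5]), reward=1.0)
```

Because the reward is attached to the whole sequence rather than to individual tokens, this is exactly the setting where pairwise human preferences (as collected for MusicRL-U) can be distilled into a reward model first.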


Brain2Music: Reconstructing Music from Human Brain Activity

Denk, Timo I., Takagi, Yu, Matsuyama, Takuya, Agostinelli, Andrea, Nakai, Tomoya, Frank, Christian, Nishimoto, Shinji

arXiv.org Artificial Intelligence

The process of reconstructing experiences from human brain activity offers a unique lens into how the brain interprets and represents the world. In this paper, we introduce a method for reconstructing music from brain activity, captured using functional magnetic resonance imaging (fMRI). Our approach uses either music retrieval or the MusicLM music generation model conditioned on embeddings derived from fMRI data. The generated music resembles the musical stimuli that human subjects experienced, with respect to semantic properties like genre, instrumentation, and mood. We investigate the relationship between different components of MusicLM and brain activity through a voxel-wise encoding modeling analysis. Furthermore, we discuss which brain regions represent information derived from purely textual descriptions of music stimuli. We provide supplementary material including examples of the reconstructed music at https://google-research.github.io/seanet/brain2music
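The retrieval branch mentioned above amounts to a nearest-neighbor lookup in a shared embedding space: an embedding predicted from fMRI is matched against embeddings of candidate music clips. A minimal sketch, with purely illustrative vectors (the abstract does not specify the embedding model or dimensionality):

```python
import numpy as np

def retrieve_music(fmri_embedding, candidate_embeddings):
    """Return the index of the candidate clip whose embedding is most
    cosine-similar to the embedding predicted from fMRI activity."""
    q = fmri_embedding / np.linalg.norm(fmri_embedding)
    C = candidate_embeddings / np.linalg.norm(
        candidate_embeddings, axis=1, keepdims=True)
    return int(np.argmax(C @ q))

# Toy example: the second candidate points in the query's direction.
query = np.array([1.0, 0.0])
candidates = np.array([[0.0, 1.0], [2.0, 0.1], [-1.0, 0.0]])
best = retrieve_music(query, candidates)
```

The same predicted embedding can instead condition MusicLM directly, which is the generation branch the paper contrasts with retrieval.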


Meta's open-source MusicGen AI uses text to create song genre mashups

Engadget

Meta's Audiocraft research team has just released MusicGen, an open-source deep learning language model that can generate new music based on text prompts and even be aligned to an existing song, The Decoder reported. It's much like ChatGPT for audio, letting you describe the style of music you want, drop in an existing tune (optionally) and then click "Generate." After a good chunk of time (around 160 seconds in my case), it spits out a short piece of all-new music based on your text prompts and melody. The demo on Facebook's Hugging Face AI site lets you describe your music, providing a handful of examples like "an 80s driving pop song with heavy drums and synth pads in the background." You can then "condition" that on a given song up to 30 seconds long, with controls letting you select a specific portion of it.


Efficient Neural Music Generation

Lam, Max W. Y., Tian, Qiao, Li, Tang, Yin, Zongyu, Feng, Siyuan, Tu, Ming, Ji, Yuliang, Xia, Rui, Ma, Mingbo, Song, Xuchen, Chen, Jitong, Wang, Yuping, Wang, Yuxuan

arXiv.org Artificial Intelligence

Recent progress in music generation has been remarkably advanced by the state-of-the-art MusicLM, which comprises a hierarchy of three LMs for semantic, coarse acoustic, and fine acoustic modeling, respectively. Yet, sampling with MusicLM requires processing through these LMs one by one to obtain the fine-grained acoustic tokens, making it computationally expensive and prohibitive for real-time generation. Efficient music generation with quality on par with MusicLM remains a significant challenge. In this paper, we present MeLoDy (M for music; L for LM; D for diffusion), an LM-guided diffusion model that generates music audio of state-of-the-art quality while reducing the forward passes in MusicLM by 95.7% or 99.6%, respectively, for sampling 10s or 30s of music. MeLoDy inherits the highest-level LM from MusicLM for semantic modeling, and applies a novel dual-path diffusion (DPD) model and an audio VAE-GAN to efficiently decode the conditioning semantic tokens into a waveform. DPD simultaneously models the coarse and fine acoustics by incorporating the semantic information into segments of latents via cross-attention at each denoising step. Our experimental results suggest the superiority of MeLoDy, not only in its practical advantages in sampling speed and infinitely continuable generation, but also in its state-of-the-art musicality, audio quality, and text correlation. Our samples are available at https://Efficient-MeLoDy.github.io/.
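The efficiency claim comes from replacing token-by-token acoustic LM sampling, whose cost grows with clip length, with a decoder whose cost is fixed. A back-of-the-envelope sketch of why the relative savings grow with length; the token rate and step count below are illustrative assumptions, not MeLoDy's actual numbers:

```python
def ar_forward_passes(seconds, tokens_per_second):
    # An autoregressive acoustic LM needs one forward pass per token.
    return seconds * tokens_per_second

def reduction(seconds, tokens_per_second, denoising_steps):
    # A diffusion decoder runs a fixed number of denoising steps
    # regardless of clip length, so relative savings grow with length.
    ar = ar_forward_passes(seconds, tokens_per_second)
    return 1.0 - denoising_steps / ar

# Illustrative numbers: 600 acoustic tokens/s, 100 denoising steps.
r10 = reduction(10, 600, 100)   # savings for a 10s clip
r30 = reduction(30, 600, 100)   # savings for a 30s clip
```

This also explains why the reported reduction differs between 10s and 30s samples: the fixed decoding cost is amortized over more audio.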


Try Google's new AI music service for yourself: Here's how

PCWorld

You've heard of AI art, and AI chatbots like ChatGPT and Bing. Now you can try out AI music, compliments of Google's MusicLM. Originally announced in January, MusicLM is now available for you to play with via what Google calls its AI Test Kitchen. Though you can jump directly to the MusicLM site to try it out, Google may throw up a popup requiring you to sign up for AI Test Kitchen first. You'll need to provide a Gmail address and agree that whatever prompts you provide may be reviewed by human members of Google's team, though anonymized.


Google opens up access to its text-to-music AI

Engadget

AI-generated music has been in the spotlight lately, from a track that seemingly featured vocals from Drake and The Weeknd gaining traction to Spotify reportedly removing thousands of songs over concerns that people were using them to game the system. Now, Google is wading further into that space as the company is opening up access to its text-to-music AI, which is called MusicLM. Google detailed the system back in January when it published research on MusicLM. The generative AI landscape has shifted dramatically this year, however, and now Google feels comfortable enough to let the public try MusicLM. "We've been working with musicians like Dan Deacon and hosting workshops to see how this technology can empower the creative process," Google Research product manager Hema Manickavasagam and Google Labs product manager Kristin Yim wrote in a blog post.