Masked Audio Generation using a Single Non-Autoregressive Transformer
Alon Ziv, Itai Gat, Gael Le Lan, Tal Remez, Felix Kreuk, Alexandre Défossez, Jade Copet, Gabriel Synnaeve, Yossi Adi
arXiv.org Artificial Intelligence, Jan-9-2024
We introduce MAGNeT, a masked generative sequence modeling method that operates directly over several streams of audio tokens. Unlike prior work, MAGNeT consists of a single-stage, non-autoregressive transformer. During training, we predict spans of masked tokens obtained from a masking scheduler, while during inference we gradually construct the output sequence over several decoding steps. To further enhance the quality of the generated audio, we introduce a novel rescoring method in which we leverage an external pre-trained model to rescore and rank predictions from MAGNeT, which are then used for later decoding steps. Lastly, we explore a hybrid version of MAGNeT that fuses autoregressive and non-autoregressive models: the first few seconds are generated autoregressively while the rest of the sequence is decoded in parallel. We evaluate MAGNeT on text-to-music and text-to-audio generation and conduct an extensive empirical evaluation, considering both objective metrics and human studies. The proposed approach is comparable to the evaluated baselines while being significantly faster (7x faster than the autoregressive baseline). Samples are available on our demo page: https://pages.cs.huji.ac.il/adiyoss-lab/MAGNeT

Recent developments in self-supervised representation learning (Hsu et al., 2021; Défossez et al., 2022), sequence modeling (Touvron et al., 2023; Rozière et al., 2023), and audio synthesis (Lee et al., 2022; Polyak et al., 2021) have enabled a great leap in the performance of high-quality conditional audio generation. Recently, Défossez et al. (2022) and Zeghidour et al. (2021) proposed applying a VQ-VAE directly to the raw waveform, using residual vector quantization to obtain a multi-stream discrete representation of the audio signal. Later, Kreuk et al. (2022a), Wang et al. (2023), Zhang et al. (2023), Copet et al. (2023), and Kreuk et al. (2022b) presented conditional language modeling over such audio representations. In parallel, Schneider et al. (2023), Huang et al. (2023b), and Liu et al. (2023a) proposed training conditional diffusion-based generative models that operate on learned continuous representations of the audio signal obtained from a pre-trained auto-encoder.

Work was done as part of Alon's internship at FAIR.
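To make the mask-and-decode description concrete, here is a minimal single-stream sketch of confidence-based iterative masked decoding in the spirit of MaskGIT-style decoders. It is not MAGNeT's exact implementation (which predicts spans across several token streams); `model`, `mask_id`, and the cosine schedule are assumptions for illustration.

```python
import math
import torch

def iterative_masked_decode(model, seq_len, mask_id, num_steps=20):
    """Confidence-based iterative decoding over one token stream.

    `model` is a hypothetical callable mapping a (seq_len,) token tensor
    to (seq_len, vocab) logits; `mask_id` is a special [MASK] entry in
    the vocabulary.
    """
    tokens = torch.full((seq_len,), mask_id, dtype=torch.long)
    for step in range(num_steps):
        # Cosine schedule: fraction of positions left masked after this
        # step; it reaches 0 on the final step, so the output is complete.
        frac = math.cos(math.pi / 2 * (step + 1) / num_steps)
        num_masked = int(frac * seq_len)

        probs = model(tokens).softmax(dim=-1)       # (seq_len, vocab)
        confidence, candidates = probs.max(dim=-1)  # greedy per position

        # Already-committed positions get infinite confidence so they
        # are never re-masked.
        committed = tokens != mask_id
        confidence = confidence.masked_fill(committed, float("inf"))

        # Fill every masked position, then re-mask the least confident.
        tokens = torch.where(committed, tokens, candidates)
        tokens[confidence.argsort()[:num_masked]] = mask_id
    return tokens
```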
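The rescoring step can be pictured as blending the generator's own confidence with the likelihood an external pre-trained model assigns to the same tokens, and using the blended score to decide which positions to commit. A hedged sketch; `external_model` and the geometric weighting `w` are assumptions, and the paper's exact formulation may differ:

```python
def rescore(tokens, confidence, external_model, w=0.7):
    # Per-position probability the external model gives to the chosen tokens.
    ext_probs = external_model(tokens).softmax(dim=-1)  # (seq_len, vocab)
    ext_conf = ext_probs.gather(-1, tokens.unsqueeze(-1)).squeeze(-1)
    # Geometric interpolation of the two confidence signals; this result
    # replaces `confidence` when selecting positions to re-mask.
    return confidence ** w * ext_conf ** (1 - w)
```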
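The hybrid variant can likewise be sketched as a simple handoff: a short prefix is sampled autoregressively, then the non-autoregressive decoder fills the remainder in parallel, conditioned on that fixed prefix. `ar_model.sample_next` and `nar_decode` are hypothetical interfaces, not the paper's API:

```python
def hybrid_generate(ar_model, nar_decode, prefix_len, total_len):
    # Decode the first few seconds one token at a time.
    prefix = []
    for _ in range(prefix_len):
        prefix.append(ar_model.sample_next(prefix))
    # Remaining positions start masked and are resolved in parallel,
    # conditioned on the committed prefix.
    return nar_decode(prefix=prefix, target_len=total_len)
```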
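For context on the multi-stream token representation mentioned above: residual vector quantization (RVQ) tokenizes audio latents by letting each codebook quantize the residual left by the previous one, yielding one token stream per codebook. A minimal sketch, assuming a hypothetical `codebooks` list of `(num_codes, dim)` tensors:

```python
import torch

def residual_vector_quantize(latents, codebooks):
    """latents: (num_frames, dim) -> codes: (num_codebooks, num_frames)."""
    residual = latents
    codes = []
    for cb in codebooks:
        # Nearest code by Euclidean distance for every frame.
        dists = torch.cdist(residual.unsqueeze(0), cb.unsqueeze(0)).squeeze(0)
        idx = dists.argmin(dim=-1)
        codes.append(idx)
        # Quantize only what the previous codebooks failed to capture.
        residual = residual - cb[idx]
    return torch.stack(codes)
```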