MelNet: A Generative Model for Audio in the Frequency Domain
Capturing high-level structure in audio waveforms is challenging because a single second of audio spans tens of thousands of timesteps. While long-range dependencies are difficult to model directly in the time domain, we show that they can be more tractably modelled in two-dimensional time-frequency representations such as spectrograms. By leveraging this representational advantage, in conjunction with a highly expressive probabilistic model and a multiscale generation procedure, we design a model capable of generating high-fidelity audio samples which capture structure at timescales that time-domain models have yet to achieve. We apply our model to a variety of audio generation tasks, including unconditional speech generation, music generation, and text-to-speech synthesis---showing improvements over previous approaches in both density estimates and human judgments.
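The abstract's core claim is that a spectrogram collapses tens of thousands of waveform timesteps into a far shorter sequence of time-frequency frames. A minimal NumPy sketch illustrates the scale of that compression; the sample rate, FFT size, and hop length below are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

# One second of audio at 16 kHz: 16,000 timesteps in the time domain.
sr = 16000
audio = np.random.randn(sr).astype(np.float32)  # placeholder waveform

# Short-time Fourier transform parameters (illustrative values only).
n_fft, hop = 1024, 256
window = np.hanning(n_fft)

# Slice the waveform into overlapping windowed frames and take the
# magnitude of the real FFT of each frame.
n_frames = 1 + (len(audio) - n_fft) // hop
frames = np.stack([audio[i * hop : i * hop + n_fft] * window
                   for i in range(n_frames)])
spectrogram = np.abs(np.fft.rfft(frames, axis=1))

print(audio.shape)        # (16000,) timesteps along the time axis
print(spectrogram.shape)  # (59, 513): ~270x fewer steps along time
```

A dependency one second apart spans 16,000 autoregressive steps in the waveform but only about 59 steps along the spectrogram's time axis, which is the representational advantage the abstract leverages.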
Jun-4-2019