Latent Diffusion for Language Generation
Neural Information Processing Systems
Diffusion models have achieved great success in modeling continuous data modalities such as images, audio, and video, but have seen limited use in discrete domains such as language. Recent attempts to adapt diffusion to language have presented diffusion as an alternative to existing pretrained language models. We view diffusion and existing language models as complementary. We demonstrate that encoder-decoder language models can be utilized to efficiently learn high-quality language autoencoders. We then demonstrate that continuous diffusion models can be learned in the latent space of the language autoencoder, enabling us to sample continuous latent representations that can be decoded into natural language with the pretrained decoder.
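Below is a minimal sketch of the two-stage pipeline the abstract describes: encode text into a fixed-size latent with a (frozen) language autoencoder, train a continuous diffusion model on those latents, then sample new latents for decoding. All names, dimensions, and the toy GRU autoencoder and MLP denoiser are illustrative assumptions, not the paper's implementation; the paper uses a pretrained encoder-decoder language model (e.g., BART-style) as the autoencoder.

```python
# Toy sketch of latent diffusion for language (assumed components, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM, VOCAB, SEQ_LEN, T = 64, 1000, 16, 100   # hypothetical sizes


class TextAutoencoder(nn.Module):
    """Stand-in for a pretrained encoder-decoder LM compressed to a fixed latent."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, LATENT_DIM)
        self.encoder = nn.GRU(LATENT_DIM, LATENT_DIM, batch_first=True)
        self.decoder = nn.GRU(LATENT_DIM, LATENT_DIM, batch_first=True)
        self.lm_head = nn.Linear(LATENT_DIM, VOCAB)

    def encode(self, tokens):                       # tokens: (B, SEQ_LEN)
        _, h = self.encoder(self.embed(tokens))
        return h.squeeze(0)                         # (B, LATENT_DIM)

    def decode_logits(self, z, tokens):             # teacher-forced reconstruction
        out, _ = self.decoder(self.embed(tokens), z.unsqueeze(0))
        return self.lm_head(out)                    # (B, SEQ_LEN, VOCAB)


class Denoiser(nn.Module):
    """Predicts the noise added to a latent at diffusion step t."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT_DIM + 1, 256), nn.SiLU(),
                                 nn.Linear(256, LATENT_DIM))

    def forward(self, z_t, t):
        t_feat = (t.float() / T).unsqueeze(-1)      # crude timestep conditioning
        return self.net(torch.cat([z_t, t_feat], dim=-1))


betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)


def diffusion_loss(denoiser, z0):
    """Standard DDPM epsilon-prediction loss on clean latents z0."""
    t = torch.randint(0, T, (z0.shape[0],))
    eps = torch.randn_like(z0)
    ab = alphas_bar[t].unsqueeze(-1)
    z_t = ab.sqrt() * z0 + (1 - ab).sqrt() * eps
    return F.mse_loss(denoiser(z_t, t), eps)


@torch.no_grad()
def sample_latent(denoiser, n):
    """Ancestral DDPM sampling in latent space."""
    z = torch.randn(n, LATENT_DIM)
    for t in reversed(range(T)):
        ts = torch.full((n,), t)
        eps = denoiser(z, ts)
        alpha, ab = 1.0 - betas[t], alphas_bar[t]
        z = (z - (1 - alpha) / (1 - ab).sqrt() * eps) / alpha.sqrt()
        if t > 0:
            z = z + betas[t].sqrt() * torch.randn_like(z)
    return z


# Stage 1: train (or load) the language autoencoder; Stage 2: train the denoiser
# on its frozen latents; Stage 3: sample novel latents and decode them to text.
ae, denoiser = TextAutoencoder(), Denoiser()
tokens = torch.randint(0, VOCAB, (8, SEQ_LEN))      # placeholder token batch
with torch.no_grad():
    z0 = ae.encode(tokens)                          # latents from the frozen encoder
loss = diffusion_loss(denoiser, z0)                 # one diffusion training step's loss
z_new = sample_latent(denoiser, n=4)                # latents to pass to the decoder
```

In the real pipeline the decoding step would run the pretrained decoder autoregressively from the sampled latents rather than the teacher-forced reconstruction shown here.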