Towards Text Generation with Adversarially Learned Neural Outlines

Sandeep Subramanian, Sai Rajeswar Mudumba, Alessandro Sordoni, Adam Trischler, Aaron C. Courville, Chris Pal

Neural Information Processing Systems 

Recent progress in deep generative models has been fueled by two paradigms: autoregressive models and adversarial models. We propose combining both approaches to learn generative models of text. Our method first produces a high-level sentence outline and then generates words sequentially, conditioning on both the outline and the previously generated words. We generate outlines with an adversarial model trained to approximate the distribution of sentences in a latent space induced by general-purpose sentence encoders. This provides strong, informative conditioning for the autoregressive stage.
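
To make the two-stage pipeline concrete, below is a minimal sketch of the idea in PyTorch: an adversarially trained generator maps noise to an "outline" vector in a sentence-encoder latent space, and an autoregressive decoder produces words conditioned on that outline. All module names, dimensions, and the single-layer GRU decoder are illustrative assumptions, not the paper's exact architecture, and the GAN training loop is omitted.

```python
# Illustrative sketch only: dimensions and architectures are assumptions.
import torch
import torch.nn as nn

EMB_DIM = 512    # assumed size of the sentence-encoder latent space
NOISE_DIM = 100  # assumed noise dimensionality for the generator
VOCAB = 10000    # assumed vocabulary size

class OutlineGenerator(nn.Module):
    """Adversarially trained MLP mapping noise to a sentence embedding
    ("outline") in the encoder's latent space."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM, 512), nn.ReLU(),
            nn.Linear(512, EMB_DIM),
        )

    def forward(self, z):
        return self.net(z)

class ConditionalDecoder(nn.Module):
    """Autoregressive word-level decoder conditioned on the outline:
    the outline initializes the GRU state and is concatenated to
    every input word embedding."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, 256)
        self.rnn = nn.GRU(256 + EMB_DIM, EMB_DIM, batch_first=True)
        self.out = nn.Linear(EMB_DIM, VOCAB)

    def forward(self, tokens, outline):
        x = self.embed(tokens)                                 # (B, T, 256)
        cond = outline.unsqueeze(1).expand(-1, x.size(1), -1)  # (B, T, EMB_DIM)
        h0 = outline.unsqueeze(0)                              # (1, B, EMB_DIM)
        h, _ = self.rnn(torch.cat([x, cond], dim=-1), h0)
        return self.out(h)                                     # logits over vocab

# Sampling: draw noise, generate an outline, then decode words greedily.
gen, dec = OutlineGenerator(), ConditionalDecoder()
z = torch.randn(1, NOISE_DIM)
outline = gen(z)
tokens = torch.tensor([[1]])  # assumed <bos> token id
for _ in range(20):
    logits = dec(tokens, outline)
    next_tok = logits[:, -1].argmax(-1, keepdim=True)
    tokens = torch.cat([tokens, next_tok], dim=1)
```

In this sketch the outline enters the decoder both as the initial hidden state and as an input at every timestep, which is one common way to provide strong conditioning throughout generation; the paper's actual conditioning mechanism may differ.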