DialogWAE: Multimodal Response Generation with Conditional Wasserstein Auto-Encoder
Gu, Xiaodong, Cho, Kyunghyun, Ha, Jung-Woo, Kim, Sunghun
–arXiv.org Artificial Intelligence
Variational autoencoders (VAEs) have shown promise in data-driven conversation modeling. However, most VAE conversation models match the approximate posterior distribution over the latent variables to a simple prior such as the standard normal distribution, thereby restricting the generated responses to a relatively simple (e.g., unimodal) scope. In this paper, we propose DialogWAE, a conditional Wasserstein autoencoder (WAE) specially designed for dialogue modeling. Unlike VAEs that impose a simple distribution over the latent variables, DialogWAE models the distribution of data by training a GAN within the latent variable space. We further develop a Gaussian mixture prior network to enrich the latent space. Experiments on two popular datasets show that DialogWAE outperforms state-of-the-art approaches in generating more coherent, informative, and diverse responses.

Neural response generation has long been of interest in natural language research. Most recent approaches to data-driven conversation modeling build upon sequence-to-sequence learning (Cho et al., 2014; Sutskever et al., 2014). Previous research has demonstrated that sequence-to-sequence conversation models often suffer from the safe-response problem and fail to generate meaningful, diverse, on-topic responses (Li et al., 2015; Sato et al., 2017).
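To illustrate the idea of a Gaussian mixture prior, the following is a minimal, self-contained sketch (not the authors' implementation): a latent code is drawn by first sampling a mixture component, then drawing a reparameterised Gaussian sample from that component. In DialogWAE the mixture parameters are produced by a prior network conditioned on the dialogue context; here they are fixed placeholder constants, which is an assumption for illustration only.

```python
import random

def sample_gmm_prior(num_components=3, latent_dim=4, seed=0):
    """Sample a latent code z from a mixture of Gaussians.

    Placeholder parameters stand in for the outputs of a
    context-conditioned prior network (an assumption made here
    purely to keep the sketch runnable)."""
    rng = random.Random(seed)
    # Placeholder mixture parameters: component k has mean k and unit variance.
    mus = [[float(k)] * latent_dim for k in range(num_components)]
    sigmas = [[1.0] * latent_dim for _ in range(num_components)]
    weights = [1.0 / num_components] * num_components

    # Step 1: sample a component index from the categorical mixture weights.
    k = rng.choices(range(num_components), weights=weights)[0]

    # Step 2: reparameterised sample z = mu_k + sigma_k * eps, eps ~ N(0, I).
    z = [mus[k][i] + sigmas[k][i] * rng.gauss(0.0, 1.0)
         for i in range(latent_dim)]
    return k, z

k, z = sample_gmm_prior()
```

Because the prior is multimodal, different draws can land in different components, which is what lets the decoder produce responses from distinct regions of the latent space rather than a single unimodal blob.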
Feb-25-2019