Pre-training Multi-party Dialogue Models with Latent Discourse Inference
Yiyang Li, Xinting Huang, Wei Bi, Hai Zhao
Multi-party dialogues are more difficult for models to understand than one-to-one two-party dialogues, since they involve multiple interlocutors and therefore interweaving reply-to relations and information flows. An effective way to overcome these obstacles is to pre-train a model that understands the discourse structure of multi-party dialogues, namely, to whom each utterance is replying. However, because multi-party dialogue corpora lack explicitly annotated discourse labels, previous works fail to scale up the pre-training process, leaving the unlabeled multi-party conversational data unused. To fully utilize the unlabeled data, we propose to treat the discourse structures as latent variables, then jointly infer them and pre-train the discourse-aware model with unsupervised latent variable inference methods. Experiments on multiple downstream tasks show that our pre-trained model outperforms strong baselines by large margins and achieves state-of-the-art (SOTA) results, justifying the effectiveness of our method. The official implementation of this paper is available at https://github.com/EricLee8/MPD_EMVI.
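The abstract does not specify the inference algorithm, though the repository name (MPD_EMVI) suggests EM and variational inference. As a rough illustration only, not the authors' implementation, a hard-EM style loop over latent reply-to links might look like the sketch below; all names (`DiscourseScorer`, `hard_em_step`, the fixed utterance embeddings) are hypothetical.

```python
# Hypothetical hard-EM sketch: alternately (E-step) infer the latent
# reply-to parent of each utterance under the current model, then
# (M-step) treat those inferred links as pseudo-labels and update the
# model. Illustrative only; not the paper's actual training code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiscourseScorer(nn.Module):
    """Scores how plausible it is that utterance j replies to utterance i."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.bilinear = nn.Bilinear(dim, dim, 1)

    def forward(self, h_i: torch.Tensor, h_j: torch.Tensor) -> torch.Tensor:
        # h_i: (j, dim) candidate parents; h_j: (j, dim) broadcast child.
        return self.bilinear(h_i, h_j).squeeze(-1)

def hard_em_step(scorer, utterance_embeds, optimizer):
    """One hard-EM iteration over a single multi-party dialogue.

    utterance_embeds: (n, dim) fixed utterance representations,
    e.g. from a frozen encoder (an assumption made for brevity).
    """
    n = utterance_embeds.size(0)
    # E-step: for each utterance j > 0, pick the most likely earlier
    # utterance as its reply-to parent (the latent discourse structure).
    with torch.no_grad():
        parents = []
        for j in range(1, n):
            child = utterance_embeds[j].expand(j, -1)
            scores = scorer(utterance_embeds[:j], child)
            parents.append(int(scores.argmax()))
    # M-step: maximize the likelihood of the inferred links.
    optimizer.zero_grad()
    loss = torch.zeros(())
    for j, p in enumerate(parents, start=1):
        child = utterance_embeds[j].expand(j, -1)
        scores = scorer(utterance_embeds[:j], child)
        loss = loss + F.cross_entropy(scores.unsqueeze(0), torch.tensor([p]))
    loss.backward()
    optimizer.step()
    return parents, float(loss)

# Usage sketch with random embeddings standing in for encoder outputs:
scorer = DiscourseScorer(dim=64)
opt = torch.optim.Adam(scorer.parameters(), lr=1e-3)
embeds = torch.randn(5, 64)  # five utterances
parents, loss = hard_em_step(scorer, embeds, opt)
```

A soft (variational) variant would replace the argmax E-step with a distribution over parents and optimize an evidence lower bound instead of the hard pseudo-label loss.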
arXiv.org Artificial Intelligence
May 24, 2023