BootTOD: Bootstrap Task-oriented Dialogue Representations by Aligning Diverse Responses
Zeng, Weihao, He, Keqing, Wang, Yejie, Fu, Dayuan, Xu, Weiran
arXiv.org Artificial Intelligence
Pre-trained language models have been successful in many scenarios. However, their usefulness in task-oriented dialogues is limited due to the intrinsic linguistic differences between general text and task-oriented dialogues. Current task-oriented dialogue pre-training methods rely on a contrastive framework, which faces challenges such as selecting true positives and hard negatives, as well as a lack of diversity. In this paper, we propose a novel dialogue pre-training model called BootTOD, which learns task-oriented dialogue representations via a self-bootstrapping framework. Unlike its contrastive counterparts, BootTOD aligns context and context+response representations, removing the requirement for contrastive pairs. BootTOD also uses multiple appropriate response targets to model the intrinsic one-to-many diversity of human conversations. Experimental results show that BootTOD outperforms strong TOD baselines on diverse downstream dialogue tasks.
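The core idea described above, aligning a context representation with the representations of the same context extended by several appropriate responses, can be illustrated with a minimal sketch. The function names, embedding shapes, and the simple cosine-based alignment loss below are assumptions for illustration only; the paper's actual objective and encoder may differ.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def alignment_loss(ctx_emb, ctx_resp_embs):
    """Hypothetical bootstrapping-style alignment loss: pull the context
    embedding toward each context+response embedding (no negatives needed).
    Averaging over multiple response targets reflects the one-to-many
    diversity of dialogue."""
    return float(np.mean([1.0 - cosine(ctx_emb, e) for e in ctx_resp_embs]))

# Toy example: one context embedding, two context+response embeddings.
ctx = np.array([1.0, 0.0, 0.0])
resps = [np.array([1.0, 0.0, 0.0]),   # perfectly aligned -> loss 0
         np.array([0.0, 1.0, 0.0])]   # orthogonal -> loss 1
loss = alignment_loss(ctx, resps)      # mean over targets -> 0.5
```

In a contrastive setup one would also need hard negatives and risk false positives; here the loss only pulls representations together across several valid response targets, which is the property the abstract emphasizes.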
Mar-2-2024