Improving Limited Labeled Dialogue State Tracking with Self-Supervision
Wu, Chien-Sheng, Hoi, Steven, Xiong, Caiming
arXiv.org Artificial Intelligence
Existing dialogue state tracking (DST) models require plenty of labeled data. However, collecting high-quality labels is costly, especially as the number of domains increases. In this paper, we address a practical DST problem that is rarely discussed: learning efficiently with limited labeled data. We present and investigate two self-supervised objectives: preserving latent consistency and modeling conversational behavior. We encourage a DST model to produce consistent latent distributions given a perturbed input, making it more robust to unseen scenarios. We also add an auxiliary utterance-generation task, modeling a potential correlation between conversational behavior and dialogue states. The experimental results show that our proposed self-supervised signals can improve joint goal accuracy by 8.95% when only 1% of the labeled data is used on the MultiWOZ dataset. We achieve an additional 1.76% improvement when unlabeled data is jointly trained in a semi-supervised setting. We analyze and visualize how our proposed self-supervised signals help the DST task and hope to stimulate future data-efficient DST research.
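The latent-consistency objective described above can be sketched as a symmetric KL penalty between the slot-value distributions a model predicts for the original dialogue and for a perturbed view of it. This is an illustrative minimal sketch, not the paper's exact formulation: the perturbation scheme, the distributions, and the function names here are assumptions.

```python
import math

def kl_divergence(p, q):
    """KL(p || q) between two discrete distributions over slot values."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def consistency_loss(p_original, p_perturbed):
    """Symmetric KL between slot-value distributions predicted from the
    original input and from a perturbed view (e.g., word dropout).
    Identical predictions incur zero penalty; divergence is penalized."""
    return 0.5 * (kl_divergence(p_original, p_perturbed)
                  + kl_divergence(p_perturbed, p_original))

# Hypothetical slot-value distributions for one slot (e.g., "hotel-area").
p = [0.7, 0.2, 0.1]   # prediction from the original dialogue
q = [0.6, 0.3, 0.1]   # prediction from a perturbed copy

print(consistency_loss(p, p))  # 0.0 — consistent predictions, no penalty
print(consistency_loss(p, q) > 0)  # True — divergent predictions are penalized
```

In training, a weighted sum of this consistency term and the supervised DST loss would be minimized, so the regularizer also applies to unlabeled dialogues, which is what enables the semi-supervised gains reported above.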
Oct-26-2020