Self-Training with Purpose Preserving Augmentation Improves Few-shot Generative Dialogue State Tracking

Jihyun Lee, Chaebin Lee, Yunsu Kim, Gary Geunbae Lee

arXiv.org Artificial Intelligence 

In dialogue state tracking (DST), labeling the dataset involves considerable human labor. We propose a new self-training framework for few-shot generative DST that utilizes unlabeled data. Our self-training method iteratively improves the model by pseudo-labeling and employs Purpose Preserving augmentation (PPaug) to prevent overfitting. We increase the few-shot (10%) performance by approximately 4% on MultiWOZ 2.1 (Eric et al., 2019) and enhance slot recall by 8.34% for unseen values compared to the baseline.

[Figure 1: Dialogue example from a DST dataset and its belief state. The underlined parts of the dialogue are the values of the belief state and carry specific information.]
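The core loop the abstract describes, pseudo-labeling unlabeled dialogues and mixing in purpose-preserving augmentations of the confident examples, can be sketched roughly as below. This is a minimal illustration under stated assumptions, not the paper's implementation: the round count, the confidence threshold, and the helper callables (train_fn, predict_fn, augment_fn) are all hypothetical.

```python
from typing import Callable, Iterable, List, Tuple

Dialogue = str      # placeholder type: a serialized dialogue context
BeliefState = dict  # placeholder type: slot -> value mapping

def self_train(
    model,
    labeled: List[Tuple[Dialogue, BeliefState]],
    unlabeled: Iterable[Dialogue],
    train_fn: Callable,    # fine-tunes `model` on (dialogue, state) pairs
    predict_fn: Callable,  # returns (predicted_state, confidence) for a dialogue
    augment_fn: Callable,  # purpose-preserving augmentation of a dialogue
    rounds: int = 3,          # hypothetical iteration count
    threshold: float = 0.9,   # hypothetical confidence cutoff
):
    """Iteratively grow the training set with confident pseudo-labels."""
    train_set = list(labeled)
    for _ in range(rounds):
        model = train_fn(model, train_set)
        # Pseudo-label the unlabeled dialogues, keeping only confident ones.
        pseudo = []
        for dialogue in unlabeled:
            state, confidence = predict_fn(model, dialogue)
            if confidence >= threshold:
                pseudo.append((dialogue, state))
        # Perturb the surface form while keeping the belief-state values
        # intact, so repeated self-training does not overfit to its own
        # pseudo-labeled outputs.
        augmented = [(augment_fn(d), s) for d, s in pseudo]
        train_set = list(labeled) + pseudo + augmented
    return model
```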
