Choice Fusion as Knowledge for Zero-Shot Dialogue State Tracking

Ruolin Su, Jingfeng Yang, Ting-Wei Wu, Biing-Hwang Juang

arXiv.org Artificial Intelligence 

With the demanding need to deploy dialogue systems in new domains at lower cost, zero-shot dialogue state tracking (DST), which tracks a user's requirements in task-oriented dialogues without training on the desired domains, has drawn increasing attention. Although prior works have leveraged question-answering (QA) data to reduce the need for in-domain training in DST, they fail to explicitly model knowledge transfer and fusion for tracking dialogue states. To address this issue, we propose CoFunDST, which is trained on domain-agnostic QA datasets and directly uses candidate choices of slot-values as knowledge for zero-shot dialogue-state generation, based on a T5 pre-trained language model.

Nowadays, the need to deploy an increasing number of services across a variety of domains raises challenges for DST models in production [4]. However, existing dialogue datasets span only a few domains, making it impossible to train a DST model on all conceivable conversation flows [5]. Furthermore, dialogue systems are required to infer dialogue states with dynamic techniques and to offer diverse interfaces for different services. Although the copy mechanism [6] and dialogue acts [7] have been leveraged to efficiently track slots and values in the dialogue history, the performance of DST still relies on a large number of dialogue-state annotations, which are expensive and inefficient to collect for every new domain and service.
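As a concrete illustration of supplying candidate slot-value choices as knowledge, the sketch below formats a dialogue context, a slot question, and its candidate choices into a single QA-style prompt for an off-the-shelf T5 model from Hugging Face. This is a minimal sketch under assumptions: the prompt template, slot question, and choice list are invented for illustration, and it does not reproduce CoFunDST's choice-fusion mechanism, which the paper builds into the model itself rather than into the input text.

    # A minimal sketch, assuming a Hugging Face T5 backbone: zero-shot DST
    # cast as QA-style generation with candidate slot-value choices in the
    # prompt. Illustrative only; not the paper's CoFunDST architecture.
    from transformers import T5ForConditionalGeneration, T5TokenizerFast

    tokenizer = T5TokenizerFast.from_pretrained("t5-small")
    model = T5ForConditionalGeneration.from_pretrained("t5-small")

    # Hypothetical dialogue context and slot definition (not from the paper).
    dialogue = ("user: I need a place to stay in the north, nothing expensive. "
                "system: Sure, do you need parking? user: Yes, free parking please.")
    slot_question = "What price range does the user want for the hotel?"
    choices = ["cheap", "moderate", "expensive", "not mentioned"]

    # QA-style prompt: slot question + enumerated choices + dialogue context.
    prompt = (f"question: {slot_question} "
              f"choices: {', '.join(choices)} "
              f"context: {dialogue}")

    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_new_tokens=8)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))  # decoded slot value

Concatenating choices into the prompt is the simplest way to expose them to the model; the paper's contribution is a stronger coupling, fusing selected choices into the generation process instead of relying on the encoder to pick them out of the input text.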
