MoPE: Mixture of Prefix Experts for Zero-Shot Dialogue State Tracking
Tianwen Tang, Tong Zhu, Haodong Liu, Yin Bai, Jia Cheng, Wenliang Chen
Previous zero-shot DST models mainly suffer from domain transfer and partial prediction problems. To address these challenges, we propose Mixture of Prefix Experts (MoPE), which establishes connections between similar slots across different domains and thereby strengthens transfer to unseen domains. Empirical results show that MoPE-DST achieves a joint goal accuracy of 57.13% on MultiWOZ2.1 and 55.40% on SGD.
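Since the abstract only names the mechanism, the following is a minimal sketch of one way a mixture of prefix experts could be realized: slots are grouped by embedding similarity (a toy k-means here) and each group owns a trainable prefix that is prepended to the encoder input. All class and function names (MixtureOfPrefixExperts, assign_experts) are hypothetical illustrations, not the paper's implementation.

```python
# Hypothetical sketch of a mixture-of-prefix-experts layer; assumes similar
# slots from different domains are routed to a shared prefix expert.
import torch
import torch.nn as nn


class MixtureOfPrefixExperts(nn.Module):
    def __init__(self, num_experts: int, prefix_len: int, hidden_dim: int):
        super().__init__()
        # One trainable prefix (prefix_len x hidden_dim) per expert/cluster.
        self.prefixes = nn.Parameter(
            torch.randn(num_experts, prefix_len, hidden_dim) * 0.02
        )

    def forward(self, token_embeds: torch.Tensor, expert_ids: torch.Tensor):
        # token_embeds: (batch, seq_len, hidden_dim)
        # expert_ids:   (batch,) expert index assigned to each slot query
        prefix = self.prefixes[expert_ids]  # (batch, prefix_len, hidden_dim)
        return torch.cat([prefix, token_embeds], dim=1)


def assign_experts(slot_embeds: torch.Tensor, num_experts: int, iters: int = 10):
    """Toy k-means over slot embeddings, so similar slots (possibly from
    different domains) end up sharing one prefix expert."""
    centroids = slot_embeds[torch.randperm(len(slot_embeds))[:num_experts]].clone()
    for _ in range(iters):
        dists = torch.cdist(slot_embeds, centroids)  # (num_slots, num_experts)
        ids = dists.argmin(dim=1)
        for k in range(num_experts):
            members = slot_embeds[ids == k]
            if len(members):
                centroids[k] = members.mean(dim=0)
    return ids


# Usage: route each slot to its expert, then prepend that expert's prefix.
slot_embeds = torch.randn(30, 768)            # e.g. embeddings of 30 slot names
expert_ids = assign_experts(slot_embeds, num_experts=4)
mope = MixtureOfPrefixExperts(num_experts=4, prefix_len=8, hidden_dim=768)
tokens = torch.randn(30, 64, 768)             # per-slot dialogue encodings
augmented = mope(tokens, expert_ids)          # (30, 72, 768)
```

Because an unseen domain's slots can still be embedded and routed to the nearest existing expert, a scheme like this gives a plausible path for zero-shot transfer, which is the connection the abstract highlights.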