Interpreting Answers to Yes-No Questions in Dialogues from Multiple Domains
Wang, Zijie, Rashid, Farzana, Blanco, Eduardo
People often answer yes-no questions without explicitly saying yes, no, or similar polar keywords. Figuring out the meaning of indirect answers is challenging, even for large language models. In this paper, we investigate this problem working with dialogues from multiple domains. We release new benchmarks in three diverse domains: movie scripts, tennis interviews, and airline customer service. We present an approach grounded in distant supervision and blended training to quickly adapt to a new dialogue domain. Experimental results show that our approach is never detrimental and yields F1 improvements of 11-34%.
Interpreting Indirect Answers to Yes-No Questions in Multiple Languages
Wang, Zijie, Hossain, Md Mosharaf, Mathur, Shivam, Melo, Terry Cruz, Ozler, Kadir Bulut, Park, Keun Hee, Quintero, Jacob, Rezaei, MohammadHossein, Shakya, Shreya Nupur, Uddin, Md Nayem, Blanco, Eduardo
Yes-no questions expect a yes or no for an answer, but people often skip these polar keywords. Instead, they answer with long explanations that must be interpreted. In this paper, we focus on this challenging problem and release new benchmarks in eight languages. We present a distant supervision approach to collect training data, and we demonstrate that direct answers (i.e., with polar keywords) are useful for training models to interpret indirect answers (i.e., without polar keywords). Experimental results demonstrate that monolingual fine-tuning is beneficial if training data can be obtained via distant supervision for the language of interest (5 languages). Additionally, we show that cross-lingual fine-tuning is always beneficial (8 languages).