Interpreting Indirect Answers to Yes-No Questions in Multiple Languages

Zijie Wang, Md Mosharaf Hossain, Shivam Mathur, Terry Cruz Melo, Kadir Bulut Ozler, Keun Hee Park, Jacob Quintero, MohammadHossein Rezaei, Shreya Nupur Shakya, Md Nayem Uddin, Eduardo Blanco

arXiv.org Artificial Intelligence 

Yes-no questions expect a yes or no answer, but people often skip polar keywords and instead reply with long explanations that must be interpreted. In this paper, we focus on this challenging problem and release new benchmarks in eight languages. We present a distant supervision approach to collect training data. We also demonstrate that direct answers (i.e., with polar keywords) are useful to train models to interpret indirect answers (i.e., without polar keywords). Experimental results show that monolingual fine-tuning is beneficial if training data can be obtained via distant supervision for the language of interest (5 languages). Additionally, we show that cross-lingual fine-tuning is always beneficial (8 languages).
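To make the distant supervision idea concrete, the sketch below shows one way direct answers could be turned into noisy training labels: answers that open with a polar keyword receive an automatic yes/no label, while answers without such keywords are the indirect cases a trained model must interpret. The keyword lists and the `label_direct_answer` helper are illustrative assumptions for English only, not the authors' released pipeline.

```python
from typing import Optional

# Small illustrative keyword lists (assumed, not from the paper's resources).
YES_KEYWORDS = {"yes", "yeah", "yep", "sure", "definitely"}
NO_KEYWORDS = {"no", "nope", "nah", "never"}


def label_direct_answer(answer: str) -> Optional[str]:
    """Return a noisy 'yes'/'no' label if the answer opens with a polar
    keyword, or None if the answer is indirect and needs interpretation."""
    tokens = answer.strip().lower().split()
    if not tokens:
        return None
    first = tokens[0].rstrip(",.!?")
    if first in YES_KEYWORDS:
        return "yes"
    if first in NO_KEYWORDS:
        return "no"
    return None  # indirect answer: no polar keyword found


# Example: the first two pairs yield distant labels that can be used as
# training data; the third is the kind of indirect answer to be interpreted.
pairs = [
    ("Are you coming to the party?", "Yes, I'll be there at eight."),
    ("Did the package arrive?", "No, the courier rescheduled."),
    ("Are you coming to the party?", "I have an early flight tomorrow."),
]
for question, answer in pairs:
    print(question, "->", label_direct_answer(answer))
```

In a cross-lingual setting along the lines the abstract describes, such distantly labeled direct answers (in whichever languages they are available) would serve as fine-tuning data for a multilingual classifier that is then evaluated on indirect answers in the eight benchmark languages.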
