Few-Shot Out-of-Domain Transfer Learning of Natural Language Explanations in a Label-Abundant Setup
Yordan Yordanov, Vid Kocijan, Thomas Lukasiewicz, Oana-Maria Camburu
arXiv.org Artificial Intelligence
Training a model to provide natural language explanations (NLEs) for its predictions usually requires acquiring task-specific NLEs, which is time- and resource-consuming. A potential solution is the few-shot out-of-domain transfer of NLEs from a parent task with many NLEs to a child task. In this work, we examine the setup in which the child task has few NLEs but abundant labels. We establish four few-shot transfer learning methods that cover the possible fine-tuning combinations of the labels and NLEs for the parent and child tasks. We transfer explainability from a large natural language inference dataset (e-SNLI) separately to two child tasks: (1) hard cases of pronoun resolution, where we introduce the small-e-WinoGrande dataset of NLEs on top of the WinoGrande dataset, and (2) commonsense validation (ComVE). Our results demonstrate that the parent task helps with NLE generation, and we identify the best methods for this setup.
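The abstract mentions four methods covering "the possible fine-tuning combinations of the labels and NLEs for the parent and child tasks". One plausible reading of that 2x2 grid can be sketched as follows; the regime names and the exact factoring into parent/child stages are assumptions for illustration, not the paper's terminology.

```python
from itertools import product

# Hedged sketch: enumerate a 2x2 grid of fine-tuning regimes, one
# plausible reading of the abstract's "four combinations". The varying
# factor here is whether each stage trains on labels alone or on labels
# together with NLE supervision; these names are assumptions.
parent_choices = ("labels", "labels+NLEs")
child_choices = ("labels", "labels+NLEs")

regimes = [
    {"parent": p, "child": c}
    for p, c in product(parent_choices, child_choices)
]

for r in regimes:
    print(f"parent: {r['parent']:11s} -> child: {r['child']}")
```

Under this reading, e-SNLI would supply the parent-stage data and WinoGrande/ComVE the child-stage data, with the child stage using its abundant labels and few NLEs.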
Oct-22-2022