Improving QA Model Performance with Cartographic Inoculation

Allen Chen, Okan Tanrikulu

arXiv.org Artificial Intelligence 

QA models are faced with complex and open-ended contextual reasoning problems, but can often learn well-performing solution heuristics by exploiting dataset-specific patterns in their training data. These patterns, or "dataset artifacts", reduce the model's ability to generalize to real-world QA problems. Using an ELECTRA-small discriminator model trained for QA, we analyze the impacts and incidence of dataset artifacts using an adversarial challenge set designed to confuse models reliant on artifacts for prediction. Extending existing work on methods for mitigating artifact impacts, we propose cartographic inoculation, a novel method that fine-tunes models on an optimized subset of the challenge data to reduce model reliance on dataset artifacts. We show that by selectively fine-tuning a model on ambiguous adversarial examples from a challenge set, significant performance improvements can be made on the full challenge dataset with minimal loss of model generalizability to other data.

Figure 1: Visualization depicting the inoculation by fine-tuning method and potential outcomes; figure adapted from Liu et al. (2019).
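For concreteness, the sketch below illustrates the general shape of such a procedure under several assumptions: it is not the paper's implementation, the `select_ambiguous` and `inoculate` helpers are hypothetical names, challenge-set examples are assumed to already carry a dataset-cartography `variability` statistic from a prior training run, and the selection fraction and training hyperparameters are purely illustrative.

```python
# Hedged sketch: fine-tune an ELECTRA-small QA model on only the most
# "ambiguous" challenge-set examples, as ranked by dataset-cartography
# training dynamics. Field names and hyperparameters are assumptions.
from transformers import AutoModelForQuestionAnswering, Trainer, TrainingArguments


def select_ambiguous(examples, top_fraction=0.33):
    """Rank examples by their precomputed training-dynamics 'variability'
    and keep the most ambiguous fraction (illustrative threshold)."""
    ranked = sorted(examples, key=lambda ex: ex["variability"], reverse=True)
    return ranked[: max(1, int(len(ranked) * top_fraction))]


def inoculate(challenge_dataset, output_dir="inoculated-electra"):
    """Fine-tune on the ambiguous subset of an already-tokenized QA
    challenge dataset whose items also carry cartography statistics."""
    model = AutoModelForQuestionAnswering.from_pretrained(
        "google/electra-small-discriminator"
    )
    args = TrainingArguments(
        output_dir=output_dir,
        num_train_epochs=1,              # a few inoculation epochs
        learning_rate=3e-5,              # illustrative hyperparameters
        per_device_train_batch_size=16,
    )
    subset = select_ambiguous(list(challenge_dataset))
    Trainer(model=model, args=args, train_dataset=subset).train()
    return model
```

Ranking by variability follows the dataset-cartography convention in which high-variability examples occupy the "ambiguous" region of the data map; the exact selection criterion used in the paper may differ.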