When is dataset cartography ineffective? Using training dynamics does not improve robustness against Adversarial SQuAD
arXiv.org Artificial Intelligence
In this paper, I investigate the effectiveness of dataset cartography for extractive question answering on the SQuAD dataset. I begin by analyzing annotation artifacts in SQuAD and evaluate the impact of two adversarial datasets, AddSent and AddOneSent, on an ELECTRA-small model. Using training dynamics, I partition SQuAD into easy-to-learn, ambiguous, and hard-to-learn subsets. I then compare the performance of models trained on these subsets to those trained on randomly selected samples of equal size. Results show that training on cartography-based subsets does not improve generalization to the SQuAD validation set or the AddSent adversarial set. While the hard-to-learn subset yields a slightly higher F1 score on the AddOneSent dataset, the overall gains are limited. These findings suggest that dataset cartography provides little benefit for adversarial robustness in SQuAD-style QA tasks. I conclude by comparing these results to prior findings on SNLI and discuss possible reasons for the observed differences.
Mar-23-2025
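The partitioning described in the abstract follows the dataset cartography recipe: log the model's probability of the gold answer across training epochs, then rank examples by mean confidence and cross-epoch variability. A minimal sketch of that statistic computation is below, assuming per-epoch gold-answer probabilities have already been collected; the function name and the equal-thirds split fraction are illustrative choices, not details from the paper.

```python
import numpy as np

def cartography_partition(gold_probs, frac=1 / 3):
    """Partition examples by training dynamics into data-map regions.

    gold_probs: array of shape (n_epochs, n_examples) holding the model's
    probability of the gold answer for each example at each epoch.
    Returns index arrays for easy-to-learn, ambiguous, and hard-to-learn
    subsets, each containing a `frac` share of the examples.
    """
    confidence = gold_probs.mean(axis=0)   # mean gold probability per example
    variability = gold_probs.std(axis=0)   # spread of that probability across epochs
    k = int(gold_probs.shape[1] * frac)
    easy = np.argsort(-confidence)[:k]     # consistently high confidence
    hard = np.argsort(confidence)[:k]      # consistently low confidence
    ambiguous = np.argsort(-variability)[:k]  # confidence fluctuates most
    return easy, ambiguous, hard

# Toy run: example 0 is learned immediately, example 1 fluctuates,
# example 2 is never learned.
probs = np.array([[0.90, 0.10, 0.05],
                  [0.95, 0.90, 0.05],
                  [0.90, 0.20, 0.10]])
easy, ambiguous, hard = cartography_partition(probs)
```

With this toy input, the stable high-probability example lands in the easy subset, the fluctuating one in the ambiguous subset, and the low-probability one in the hard subset; the paper then trains separate models on each subset and on size-matched random samples for comparison.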