Contrastive Error Attribution for Finetuned Language Models
Faisal Ladhak, Esin Durmus, Tatsunori Hashimoto
arXiv.org Artificial Intelligence
Recent work has identified noisy and misannotated data as a core cause of hallucinations and unfaithful outputs in Natural Language Generation (NLG) tasks. Consequently, identifying and removing these examples is a key open challenge in creating reliable NLG systems. In this work, we introduce a framework to identify and remove low-quality training instances that lead to undesirable outputs, such as faithfulness errors in text summarization. We show that existing approaches for error tracing, such as gradient-based influence measures, do not perform reliably for detecting faithfulness errors in NLG datasets. We overcome the drawbacks of existing error tracing methods through a new, contrast-based estimate that compares undesired generations to human-corrected outputs. Our proposed method achieves a mean average precision of 0.93 at detecting data errors across synthetic tasks with known ground truth, substantially outperforming existing approaches. Re-training models on data cleaned with this approach leads to a 70% reduction in entity hallucinations on the NYT dataset and a 55% reduction in semantic errors on the E2E dataset.
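The contrast-based estimate described in the abstract lends itself to a short illustration. The sketch below is a hypothetical simplification, not the authors' implementation: the function names, the `loss_fn` callable, and the use of a plain gradient dot product are all assumptions. It scores each training example by how much more its loss gradient aligns with a gradient computed on the undesired generation than with one computed on the human-corrected output, so high-scoring examples are candidate sources of the error.

```python
import torch

def flat_grad(model, loss_fn, batch):
    """Flattened gradient of the loss on `batch` w.r.t. model parameters."""
    params = [p for p in model.parameters() if p.requires_grad]
    loss = loss_fn(model, batch)  # must return a scalar tensor
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.reshape(-1) for g in grads])

def contrastive_error_score(model, loss_fn, train_example,
                            bad_example, corrected_example):
    """Hypothetical contrastive attribution score (a sketch, not the
    paper's estimator): how much more does this training example's
    gradient align with the undesired generation than with the
    human-corrected output? Higher = more likely culprit."""
    g_train = flat_grad(model, loss_fn, train_example)
    g_bad = flat_grad(model, loss_fn, bad_example)        # (input, undesired generation)
    g_good = flat_grad(model, loss_fn, corrected_example)  # (input, human-corrected output)
    # Positive score: the example pushes the model toward the bad
    # output more strongly than toward the corrected one.
    return torch.dot(g_train, g_bad - g_good).item()
```

Under this sketch, cleaning the dataset amounts to ranking all training examples by this score and removing the highest-scoring ones before re-training, mirroring the re-training experiments the abstract reports on NYT and E2E.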
Jul-11-2023