Your Model is Overconfident, and Other Lies We Tell Ourselves
Timothee Mickus, Aman Sinha, Raúl Vázquez
arXiv.org Artificial Intelligence
The difficulty intrinsic to a given example, rooted in its inherent ambiguity, is a key yet often overlooked factor in evaluating neural NLP models. We investigate the interplay and divergence among various metrics for assessing intrinsic difficulty, including annotator dissensus, training dynamics, and model confidence. Through a comprehensive analysis using 29 models on three datasets, we reveal that while correlations exist among these metrics, their relationships are neither linear nor monotonic. By disentangling these dimensions of uncertainty, we aim to refine our understanding of data complexity and its implications for evaluating and improving NLP models.
Mar-3-2025
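As a rough illustration of the kind of comparison the abstract describes, the minimal sketch below contrasts two per-example difficulty proxies on hypothetical toy data: annotator dissensus (entropy of the label distribution) and model confidence (a noisy, non-linear function of dissensus here). This is not the authors' pipeline or data; the proxies, the toy generation process, and the choice of Pearson versus Spearman correlation are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(0)

def label_entropy(label_counts):
    """Entropy of the annotator label distribution for one example (a dissensus proxy)."""
    p = label_counts / label_counts.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

# Toy data (hypothetical): 1000 examples, 5 annotators, 3 classes.
counts = rng.multinomial(5, [0.5, 0.3, 0.2], size=1000)
dissensus = np.array([label_entropy(c) for c in counts])

# Hypothetical model confidence: a non-linear, noisy function of dissensus,
# standing in for e.g. an averaged max softmax probability across models.
confidence = 1.0 / (1.0 + np.exp(3.0 * (dissensus - 0.6))) + rng.normal(0, 0.05, size=1000)

# Pearson assumes a linear relation; Spearman only assumes monotonicity.
# Comparing the two is one simple way to probe whether a relation between
# difficulty metrics is linear, merely monotonic, or neither.
print("Pearson r:  ", pearsonr(dissensus, confidence)[0])
print("Spearman rho:", spearmanr(dissensus, confidence)[0])
```

If the two coefficients diverge noticeably, the relation is monotonic but not linear; if both are weak despite a visible pattern, the relation may be non-monotonic, which is the sort of distinction the paper's analysis draws between its difficulty metrics.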