Stop Measuring Calibration When Humans Disagree
Joris Baan, Wilker Aziz, Barbara Plank, Raquel Fernández
arXiv.org Artificial Intelligence
Calibration is a popular framework to evaluate whether a classifier knows when it does not know - i.e., its predictive probabilities are a good indication of how likely a prediction is to be correct. Correctness is commonly estimated against the human majority class. Recently, calibration to human majority has been measured on tasks where humans inherently disagree about which class applies. We show that measuring calibration to human majority given inherent disagreements is theoretically problematic, demonstrate this empirically on the ChaosNLI dataset, and derive several instance-level measures of calibration that capture key statistical properties of human judgements - class frequency, ranking and entropy.
Nov-30-2022
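The abstract's instance-level view can be illustrated with a small sketch: given a human judgement distribution over NLI labels (as in ChaosNLI, where each item has many annotator votes) and a model's predictive distribution, we can compare the statistical properties the authors name, namely class frequency, ranking, and entropy. The vote counts, model probabilities, and specific comparison statistics below are hypothetical illustrations, not the paper's actual measures.

```python
import math

def entropy(p):
    """Shannon entropy (in nats) of a probability vector."""
    return -sum(q * math.log(q) for q in p if q > 0)

def human_distribution(votes):
    """Relative class frequencies from a list of annotator labels."""
    total = len(votes)
    counts = {}
    for v in votes:
        counts[v] = counts.get(v, 0) + 1
    return {c: n / total for c, n in counts.items()}

# Hypothetical: 100 annotator votes for one NLI item with genuine disagreement.
votes = ["entailment"] * 55 + ["neutral"] * 35 + ["contradiction"] * 10
human = human_distribution(votes)

# Hypothetical model predictive distribution for the same item:
# overconfident in the majority class relative to the human judgements.
model = {"entailment": 0.90, "neutral": 0.07, "contradiction": 0.03}

classes = ["entailment", "neutral", "contradiction"]
human_p = [human.get(c, 0.0) for c in classes]
model_p = [model[c] for c in classes]

# Three instance-level comparisons, one per property named in the abstract:
# class frequency -> total variation distance between the two distributions,
# ranking -> whether both order the classes identically,
# entropy -> gap between human and model uncertainty.
freq_gap = sum(abs(h - m) for h, m in zip(human_p, model_p)) / 2
same_ranking = (sorted(classes, key=lambda c: -human.get(c, 0.0))
                == sorted(classes, key=lambda c: -model[c]))
entropy_gap = abs(entropy(human_p) - entropy(model_p))

print(f"TV distance: {freq_gap:.3f}, same ranking: {same_ranking}, "
      f"entropy gap: {entropy_gap:.3f}")
```

Here the model ranks the classes the same way as the annotators, yet its distribution is far sharper (large entropy gap): exactly the kind of mismatch that majority-based calibration metrics cannot see, since the model's top class matches the human majority.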