Human Comprehension of Fairness in Machine Learning
Saha, Debjani, Schumann, Candice, McElfresh, Duncan C., Dickerson, John P., Mazurek, Michelle L., Tschantz, Michael Carl
arXiv.org Artificial Intelligence
Bias in machine learning has manifested injustice in several areas, such as medicine, hiring, and criminal justice. In response, computer scientists have developed myriad definitions of fairness to correct this bias in fielded algorithms. While some definitions are based on established legal and ethical norms, others are largely mathematical. It is unclear whether the general public agrees with these fairness definitions, and perhaps more importantly, whether they understand these definitions. We take initial steps toward bridging this gap between ML researchers and the public by addressing the question: does a non-technical audience understand a basic definition of ML fairness? We develop a metric to measure comprehension of one such definition: demographic parity. We validate this metric using online surveys, and study the relationship between comprehension and sentiment, demographics, and the application at hand.
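Demographic parity, the fairness definition studied in the paper, requires that a classifier's positive-prediction rate be equal across demographic groups. A minimal sketch of how one might compute the demographic parity gap is below; the function name and the toy data are illustrative assumptions, not part of the paper's method.

```python
from collections import defaultdict

def demographic_parity_difference(predictions, groups):
    """Gap between the highest and lowest positive-prediction
    rates across groups (0.0 means demographic parity holds).

    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels, same length as predictions
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for y_hat, g in zip(predictions, groups):
        counts[g][0] += y_hat
        counts[g][1] += 1
    rates = [pos / total for pos, total in counts.values()]
    return max(rates) - min(rates)

# Toy example: group A receives positives at rate 2/3, group B at 1/3
preds = [1, 1, 0, 1, 0, 0]
grps = ["A", "A", "A", "B", "B", "B"]
print(demographic_parity_difference(preds, grps))  # 0.333...
```

A gap of zero means both groups receive positive outcomes at the same rate, which is what demographic parity demands regardless of the true labels.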
Dec-16-2019