What You See Is What You Get? The Impact of Representation Criteria on Human Bias in Hiring
Peng, Andi; Nushi, Besmira; Kiciman, Emre; Inkpen, Kori; Suri, Siddharth; Kamar, Ece
–arXiv.org Artificial Intelligence
Abstract

Although systematic biases in decision-making are widely documented, the ways in which they emerge from different sources are less well understood. We present a controlled experimental platform to study gender bias in hiring by decoupling the effect of the world distribution (the gender breakdown of candidates in a specific profession) from bias in human decision-making. We explore the effectiveness of representation criteria, i.e., the fixed proportional display of candidates, as an intervention strategy for mitigating gender bias, by conducting experiments that measure human decision-makers' rankings of whom they would recommend as potential hires. Experiments across professions with varying gender proportions show that balancing gender representation in candidate slates can correct biases for some professions where the world distribution is skewed, although doing so has no impact on other professions where persistent human preferences are at play. We show that the gender of the decision-maker, the complexity of the decision-making task, and the over- and under-representation of genders in the candidate slate can all impact the final decision. By decoupling sources of bias, we can better isolate strategies for bias mitigation in human-in-the-loop systems.

Introduction

Machine learning can aid decision-making and is used in recommendation systems that play increasingly prevalent roles in the world. We now deploy systems to help hire candidates (HireVue 2018), determine whom to police more heavily (Veale, Van Kleek, and Binns 2018), and assess the likelihood that an individual will recidivate (Angwin et al. 2016). Because these systems are trained on real-world data, they often produce biased decision outcomes in a manner that is discriminatory against underrepresented groups. Systems have been found to unfairly discriminate against defendants of color in assessing bail (Angwin et al. 2016), incorrectly classify minority groups in facial recognition tasks (Raji and Buolamwini 2019), and enable wage theft against honest workers (McInnis et al. 2016). While much of the algorithmic fairness literature has focused on understanding bias from algorithms in isolation (Dwork and Ilvento 2018), …

(Figure caption: A biased decision can be impacted by world, algorithmic, and human bias.)
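The representation-criteria intervention amounts to fixing the gender proportion of each candidate slate shown to a decision-maker. Below is a minimal sketch of how such a slate could be sampled; the function name, slate size, and candidate fields are illustrative assumptions, not the authors' actual experimental code.

```python
import random

def build_slate(candidates, slate_size=8, female_share=0.5):
    """Sample a candidate slate with a fixed gender proportion.

    candidates: list of dicts, each with a "gender" field.
    female_share=0.5 corresponds to a gender-balanced slate; other
    values can instead reproduce a skewed world distribution.
    """
    women = [c for c in candidates if c["gender"] == "female"]
    men = [c for c in candidates if c["gender"] == "male"]

    n_women = round(slate_size * female_share)
    n_men = slate_size - n_women
    if len(women) < n_women or len(men) < n_men:
        raise ValueError("candidate pool too small for requested proportions")

    # Draw the required number from each group, then shuffle so that
    # display order does not reveal the manipulation to participants.
    slate = random.sample(women, n_women) + random.sample(men, n_men)
    random.shuffle(slate)
    return slate
```

Varying `female_share` across conditions is one way to contrast a balanced slate with one that mirrors a profession's skewed world distribution, in the spirit of the experiments described above.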
Sep-8-2019
- Country:
  - North America > United States (0.46)
- Genre:
  - Research Report
  - Experimental Study (1.00)
  - New Finding (1.00)
- Industry:
  - Banking & Finance (0.68)
  - Health & Medicine > Therapeutic Area (1.00)
  - Law (1.00)