AI Machine-Learning: In Bias We Trust?
MIT researchers find that the explanation methods designed to help users decide whether to trust a machine-learning model's predictions can perpetuate biases and lead to worse outcomes for people from disadvantaged groups. According to the new study, these explanation methods can be less faithful to the model's behavior for disadvantaged subgroups than for others.

Machine-learning algorithms are sometimes employed to assist human decision-makers when the stakes are high. For example, a model may predict which law school candidates are most likely to pass the bar exam, helping admissions officers decide which students to admit. Because these models are so complex, often having millions of parameters, it is nearly impossible even for AI researchers to fully understand how they make predictions.
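To make the idea concrete, here is a minimal sketch (not the study's actual method; the data and model choices are illustrative assumptions) of how a simple, interpretable surrogate "explanation" can agree with a complex model far more often for one subgroup than another. The fidelity gap arises because the minority subgroup's labels follow a pattern the simple surrogate cannot represent:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: a binary subgroup indicator plus four numeric features.
n = 2000
group = rng.integers(0, 2, n)          # 0 = majority, 1 = disadvantaged subgroup
X = rng.normal(size=(n, 5))
X[:, 0] = group
# Labels follow a linear rule for group 0 but an XOR-like rule for group 1,
# so a linear surrogate will approximate the model poorly on group 1.
y = np.where(group == 0,
             (X[:, 1] > 0).astype(int),
             (X[:, 2] * X[:, 3] > 0).astype(int))

# The complex "black box" model being explained.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# A global interpretable surrogate trained to mimic the black box's predictions.
surrogate = LogisticRegression().fit(X, black_box.predict(X))

# Fidelity = how often the explanation agrees with the black box,
# measured separately per subgroup.
agree = surrogate.predict(X) == black_box.predict(X)
for g in (0, 1):
    print(f"group {g} fidelity: {agree[group == g].mean():.2f}")
```

Running this shows markedly lower fidelity for group 1: a user relying on the surrogate to judge the model would be misled more often for that subgroup, which is the kind of disparity the researchers warn about.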
Jul-6-2022, 14:02:32 GMT