A Simple Tactic That Could Help Reduce Bias in AI
That's an emerging conclusion of research-based findings -- including my own -- that could lead to AI-enabled decision-making systems becoming less subject to bias and better able to promote equality. This is a critical possibility, given our growing reliance on AI-based systems to render evaluations and decisions in high-stakes human contexts, from court decisions to hiring to access to credit.

It's well established that AI-driven systems are subject to the biases of their human creators: we unwittingly "bake" biases into systems by training them on biased data or with "rules" created by experts who hold implicit biases.

Consider the Allegheny Family Screening Tool (AFST), an AI-based system that predicts the likelihood that a child is in an abusive situation, using data from Allegheny County, Pennsylvania's Department of Human Services -- including records from public agencies related to child welfare, drug and alcohol services, housing, and others. Caseworkers use reports of potential abuse from the community, along with whatever publicly available data they can find for the family involved, to run the model, which produces a risk score from 1 to 20; a sufficiently high score triggers an investigation.
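To make that decision flow concrete, here is a minimal sketch of a threshold-triggered screening workflow like the one described above. The feature names, weights, and threshold are illustrative assumptions, not the actual AFST model; the point is simply that the score, and therefore the decision to investigate, inherits whatever biases are embedded in the underlying agency data and hand-set rules.

```python
# Hypothetical sketch of a threshold-triggered screening workflow.
# The features, weights, and threshold below are illustrative assumptions,
# NOT the actual AFST model or its inputs.

from dataclasses import dataclass


@dataclass
class FamilyRecord:
    prior_welfare_referrals: int      # e.g., from child-welfare agency records
    drug_alcohol_contacts: int        # e.g., from drug and alcohol services
    housing_instability_events: int   # e.g., from housing records


def risk_score(record: FamilyRecord) -> int:
    """Map public-agency data to a 1-20 risk score (illustrative weights only)."""
    raw = (3 * record.prior_welfare_referrals
           + 2 * record.drug_alcohol_contacts
           + 1 * record.housing_instability_events)
    return max(1, min(20, raw))  # clamp to the 1-20 scale


def should_investigate(record: FamilyRecord, threshold: int = 15) -> bool:
    """A sufficiently high score triggers an investigation."""
    return risk_score(record) >= threshold


if __name__ == "__main__":
    report = FamilyRecord(prior_welfare_referrals=4,
                          drug_alcohol_contacts=2,
                          housing_instability_events=1)
    print(risk_score(report), should_investigate(report))
```

Note that in a sketch like this, any bias in how the agencies collected the input data, or in how the weights and threshold were chosen, flows directly into who gets investigated.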