Negative feedback loops: Using an economic model to inspect bias in AI

Is bias in AI self-reinforcing? Decision-making systems that affect criminal justice, financial services, human resources, and many other areas often exhibit bias. This is especially true of algorithmic systems that learn from historical data, which tends to reflect existing societal biases. In high-stakes applications such as hiring and lending, these systems can even reshape the underlying populations they decide on. When the system is later retrained on that future data, it may become more, not less, detrimental to historically disadvantaged groups.
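The feedback loop described above can be sketched in a toy simulation. The scenario, group names, and numbers below are illustrative assumptions, not taken from the article: two groups have the same true repayment rate, but one starts with a smaller, less favorable loan history. Because rejected applicants generate no new outcome data, a pessimistic estimate is never corrected, and retraining locks the disparity in.

```python
import random

random.seed(0)

# Hypothetical lending simulation (all names and numbers are assumptions).
# Both groups repay at the same true rate, but group B's short history
# happens to look worse, so the model's estimate for B starts low.
TRUE_REPAY = {"A": 0.8, "B": 0.8}
history = {
    "A": [1] * 80 + [0] * 20,  # 100 past loans, 80% repaid
    "B": [1] * 4 + [0] * 2,    # only 6 past loans, ~67% repaid
}

APPROVE_THRESHOLD = 0.75  # lend only if the estimated repayment rate is high


def estimated_rate(group):
    """'Retrained model': the empirical repayment rate in the data so far."""
    h = history[group]
    return sum(h) / len(h)


for round_ in range(5):
    for g in ("A", "B"):
        if estimated_rate(g) >= APPROVE_THRESHOLD:
            # Approved applicants produce fresh outcome data at the true rate.
            for _ in range(20):
                history[g].append(1 if random.random() < TRUE_REPAY[g] else 0)
        # Rejected groups produce no new data, so their low estimate
        # can never be revised upward: the loop is self-reinforcing.
    print(round_, {g: round(estimated_rate(g), 2) for g in ("A", "B")})
```

Group A keeps receiving loans and its estimate stays near the true 0.8, while group B is frozen out: its estimate never changes because no new observations arrive. This is the censoring mechanism behind the "retraining makes it worse" dynamic the article examines.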
