Microsoft Researcher Details Real-World Dangers Of Algorithm Bias


However quickly artificial intelligence evolves, and however steadfastly it becomes embedded in our lives -- in health, law enforcement, sex, and beyond -- it can't outpace the biases of its creators, humans. Microsoft researcher Kate Crawford delivered a remarkable keynote, titled "The Trouble with Bias," at the Neural Information Processing Systems (NIPS) conference on Tuesday. In her keynote, Crawford presented a fascinating breakdown of the different types of harm done by algorithmic bias. As she explained, the word "bias" has a mathematically specific definition in machine learning, usually referring to errors in estimation or to over- or under-representing populations when sampling. Less discussed is bias in the sense of the disparate impact machine learning can have on different populations. "An allocative harm is when a system allocates or withholds a certain opportunity or resource," she began.
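The statistical sense of "bias" Crawford distinguishes -- under-representing a population when sampling, which skews any estimate built on that sample -- can be sketched in a few lines. The example below is a hypothetical illustration (the group labels, sizes, and inclusion probabilities are invented for this sketch, not taken from the talk): a sample that systematically under-includes one group produces an estimate of the population mean that is systematically wrong.

```python
import random
import statistics

random.seed(0)

# Hypothetical population: 70% belong to group A (outcome mean ~0.4),
# 30% to group B (outcome mean ~0.8). The overall mean is ~0.52.
population = [("A", random.gauss(0.4, 0.1)) for _ in range(7000)] + \
             [("B", random.gauss(0.8, 0.1)) for _ in range(3000)]

true_mean = statistics.mean(v for _, v in population)

# Representative sampling: every member is equally likely to be drawn.
fair_sample = random.sample(population, 500)

# Biased sampling: members of group B are ten times less likely to be
# included, so group B is under-represented in the resulting sample.
biased_sample = [(g, v) for g, v in population
                 if random.random() < (0.05 if g == "A" else 0.005)]

fair_est = statistics.mean(v for _, v in fair_sample)
biased_est = statistics.mean(v for _, v in biased_sample)

print(f"true mean:     {true_mean:.3f}")
print(f"fair sample:   {fair_est:.3f}")
print(f"biased sample: {biased_est:.3f}")  # systematically too low
```

The fair sample's estimate lands near the true mean, while the biased sample's estimate is pulled toward group A's outcomes, no matter how large the sample grows. This is the narrow, mathematical failure mode; Crawford's point is that the harms she goes on to name (allocative, representational) are a separate and less-discussed problem.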
