Silicon Valley Pretends That Algorithmic Bias Is Accidental. It's Not.

Slate 

In late June, the MIT Technology Review reported on the ways that some of the world's largest job search sites--including LinkedIn, Monster, and ZipRecruiter--have attempted to eliminate bias in their artificial intelligence job-interview software. These remedies came after incidents in which A.I. video-interviewing software was found to discriminate against people with disabilities that affect facial expression and to exhibit bias against candidates identified as women.

When artificial intelligence software produces differential and unequal results for marginalized groups along lines such as race, gender, and socioeconomic status, Silicon Valley rushes to acknowledge the errors, apply technical fixes, and apologize for the differential outcomes. We saw this when Twitter apologized after its image-cropping algorithm was shown to automatically focus on white faces over Black ones, and when TikTok expressed contrition for a technical glitch that suppressed the Black Lives Matter hashtag. These companies claim that such incidents are unintentional moments of unconscious bias or bad training data spilling over into an algorithm--that the bias is a bug, not a feature.

But the fact that these incidents continue to occur across products and companies suggests that discrimination against marginalized groups is actually central to the functioning of technology.
