Do AI systems need to come with safety warnings?

MIT Technology Review 

Considering how powerful AI systems are, and the roles they increasingly play in helping to make high-stakes decisions about our lives, homes, and societies, they receive surprisingly little formal scrutiny. That's starting to change, thanks to the blossoming field of AI audits. When they work well, these audits allow us to reliably check how well a system is working and figure out how to mitigate any possible bias or harm. Famously, a 2018 audit of commercial facial recognition systems by AI researchers Joy Buolamwini and Timnit Gebru found that the systems didn't recognize darker-skinned people as well as white people. For dark-skinned women, the error rate was up to 34%. As AI researcher Abeba Birhane points out in a new essay in Nature, the audit "instigated a body of critical work that has exposed the bias, discrimination, and oppressive nature of facial-analysis algorithms."
