AI bias is rampant. Bug bounties could help catch it.


The 1990s might have a lot to teach us about how we should tackle harm from artificial intelligence in the 2020s. Back then, some companies found they could actually make themselves safer by incentivizing the work of independent "white hat" security researchers, who would hunt for issues and disclose them in a process that looked a lot like hacking with guardrails. That's how the practice of bug bounties became a cornerstone of cybersecurity today.

In a research paper unveiled Thursday, researchers Josh Kenway, Camille François, Sasha Costanza-Chock, Inioluwa Deborah Raji and Joy Buolamwini argue that companies should once again invite their most ardent critics in -- this time, by putting bounties on harms that might originate in their artificial intelligence systems.

François, a Fulbright scholar who has advised the French CTO and who played a key role in the U.S. Senate's probe of Russia's attempts to influence the 2016 election, published the report through the Algorithmic Justice League, which was founded in 2016 and "combines art and research to illuminate the social implications and harms of artificial intelligence."
