The new weapon in the fight against biased algorithms: Bug bounties

When it comes to detecting bias in algorithms, researchers are trying to learn from the information security field – and particularly, from the bug bounty-hunting hackers who comb through software code to identify potential security vulnerabilities.

The parallels between the work of these security researchers and the hunt for possible flaws in AI models are, in fact, at the heart of the work carried out by Deborah Raji, a research fellow in algorithmic harms for the Mozilla Foundation. Presenting the research she has been carrying out with the advocacy group the Algorithmic Justice League (AJL) during the annual Mozilla Festival, Raji explained how she and her team have been studying bug bounty programs to see how they could be applied to the detection of a different type of problem: algorithmic bias.

Bug bounties, which reward hackers for discovering vulnerabilities in software code before malicious actors exploit them, have become an integral part of the information security field. Major companies such as Google, Facebook, and Microsoft now all run bug bounty programs; the number of these hackers is multiplying, and so are the financial rewards that corporations are ready to pay to fix software problems before malicious hackers find them.
