Facial recognition needs auditing and ethics standards to be safe, AI Now bias critic argues


The artificial intelligence community needs to develop the vocabulary to define and clearly explain the harms the technology can cause in order to rein in abuses of facial biometrics, AI Now Institute Technology Fellow Deb Raji argues in a TWIML AI podcast. In the episode, "How External Auditing is Changing the Facial Recognition Landscape with Deb Raji," host Sam Charrington asks about the genesis of the audits Raji and colleagues have performed of biometric facial recognition systems, industry response, and the ethical way forward. Raji describes her journey through academia and an internship with Clarifai to taking up the cause of algorithmic bias and connecting with Joy Buolamwini after watching her TED Talk. The work Raji did with others in the community gained prominence with Gender Shades, and concepts that emerged from that and similar projects have been built into engineering practices at Google. Raji characterizes facial recognition as "very immature technology," whose failures were exposed by the Gender Shades study. "It really sort of stemmed from this desire to…identify the problem in a consistent way and communicate it in a consistent way," Raji says of the early work delineating the problem of demographic differentials in facial recognition.
