Amazon receives challenge from face recognition researcher over biased AI

USATODAY - Tech Top Stories

Facial recognition technology was already seeping into everyday life -- from your photos on Facebook to police scans of mugshots -- when Joy Buolamwini noticed a serious glitch: some of the software couldn't detect dark-skinned faces like hers, and she had to hold up a white mask before it would register her face. That revelation spurred the Massachusetts Institute of Technology researcher to launch a project that is having an outsize influence on the debate over how artificial intelligence should be deployed in the real world. Her tests of facial analysis software sold by brand-name tech firms such as Amazon uncovered much higher error rates in classifying the gender of darker-skinned women than of lighter-skinned men.


Facial-recognition technology works best if you're a white guy, study says

#artificialintelligence

Facial-recognition technology is improving by leaps and bounds. Some commercial software can now tell the gender of a person in a photograph. When the person in the photo is a white man, the software is right 99 percent of the time. But the darker the skin, the more errors arise -- up to nearly 35 percent for images of darker-skinned women, according to a new study that breaks fresh ground by measuring how the technology works on people of different races and genders. These disparate results, calculated by Joy Buolamwini, a researcher at the Massachusetts Institute of Technology Media Lab, show how some of the biases in the real world can seep into artificial intelligence, the computer systems that inform facial recognition.
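The disparity the study describes comes from disaggregated evaluation: rather than reporting one overall accuracy number, error rates are computed separately for each demographic subgroup and then compared. The sketch below is a minimal illustration of that idea, not code from the study; the subgroup names and toy records are assumptions for demonstration only.

```python
# Illustrative sketch (not from the articles): computing per-subgroup error
# rates for a gender classifier, the kind of disaggregated audit described
# in the Gender Shades study. Subgroup labels and records are hypothetical.
from collections import defaultdict

def error_rates_by_subgroup(records):
    """records: iterable of (subgroup, true_label, predicted_label)."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for subgroup, truth, prediction in records:
        totals[subgroup] += 1
        if prediction != truth:
            errors[subgroup] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical audit results for one classifier
records = [
    ("lighter_male", "male", "male"),
    ("lighter_male", "male", "male"),
    ("darker_female", "female", "male"),    # misclassification
    ("darker_female", "female", "female"),
]
rates = error_rates_by_subgroup(records)
gap = rates["darker_female"] - rates["lighter_male"]
print(f"error rates: {rates}, gap: {gap:.0%}")  # the "accuracy gap" the studies report
```

Comparing subgroup error rates in percentage points is also how the later retests (discussed further below) quantify the remaining gap between lighter-skinned males and darker-skinned females.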


Making face recognition less biased doesn't make it less scary

MIT Technology Review

In the past few years, there's been a dramatic rise in the adoption of face recognition, detection, and analysis technology. You're probably most familiar with recognition systems, like Facebook's photo-tagging recommender and Apple's FaceID, which can identify specific individuals. Detection systems, on the other hand, determine whether a face is present at all; and analysis systems try to identify aspects like gender and race. All of these systems are now being used for a variety of purposes, from hiring and retail to security and surveillance. Many people believe that such systems are both highly accurate and impartial.


'Disastrous' lack of diversity in AI industry perpetuates bias, study finds

The Guardian

Lack of diversity in the artificial intelligence field has reached "a moment of reckoning", according to new findings published by a New York University research center. A "diversity disaster" has contributed to flawed systems that perpetuate gender and racial biases, found the survey, published by the AI Now Institute, which drew on more than 150 studies and reports. The AI field, which is overwhelmingly white and male, is at risk of replicating or perpetuating historical biases and power imbalances, the report said. Examples cited include image recognition services making offensive classifications of minorities, chatbots adopting hate speech, and Amazon technology failing to recognize users with darker skin colors. The biases of systems built by the AI industry can be largely attributed to the lack of diversity within the field itself, the report said.


Making face recognition less biased doesn't make it less scary (Technology Review)

#artificialintelligence

Three new papers released in the past week are bringing much-needed attention to this issue. Last Thursday, Buolamwini released an update to Gender Shades, retesting the systems she had previously examined and expanding her review to include Amazon's Rekognition and a new system from a small AI company called Kairos. There is some good news: she found that IBM, Face++, and Microsoft all improved their gender classification accuracy for darker-skinned women, with Microsoft reducing its error rate to below 2%. On the other hand, Amazon's and Kairos's platforms still had accuracy gaps of 31 and 23 percentage points, respectively, between lighter-skinned males and darker-skinned females.