Amazon's facial recognition software mistakes women for men, most often darker-skinned women

Daily Mail - Science & tech

Amazon's controversial facial recognition software, Rekognition, is facing renewed criticism. A new study from the MIT Media Lab found that Rekognition may have gender and racial biases: the software performed worse when identifying the gender of women, and worst of all on darker-skinned women. When the software was presented with a set of female faces, it incorrectly labeled 19 percent of them as male.
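For context, the gender label at issue is an attribute returned by Rekognition's face-analysis API. Below is a minimal sketch of how such a prediction is typically retrieved with the boto3 SDK; the image file name is hypothetical and AWS credentials are assumed to be configured:

```python
# Sketch: querying Amazon Rekognition's gender attribute via boto3.
# The image path is hypothetical; AWS credentials must be configured.
import boto3

client = boto3.client("rekognition")

with open("face.jpg", "rb") as f:  # hypothetical input image
    image_bytes = f.read()

# Request all facial attributes, including the Gender field
# that the MIT Media Lab study audited.
response = client.detect_faces(
    Image={"Bytes": image_bytes},
    Attributes=["ALL"],
)

for face in response["FaceDetails"]:
    gender = face["Gender"]
    print(f"Predicted: {gender['Value']} ({gender['Confidence']:.1f}% confidence)")
```

Note that the API returns a confidence score alongside each label; the studies described here concern how often the label itself is wrong for particular groups.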


Microsoft improves facial recognition software following backlash

Daily Mail - Science & tech

Microsoft has updated its facial recognition technology in an attempt to make it less 'racist'. It follows a study published in March that criticised the technology for recognising the gender of people with lighter skin tones more accurately than that of people with darker skin. The system was found to perform best on males with lighter skin and worst on females with darker skin. The problem largely comes down to the data used to train the AI system not containing enough images of people with darker skin tones. Experts from the computing firm say their tweaks have significantly reduced these errors, by up to 20 times for people with darker faces.
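Microsoft has not published the details of its fix, but one common mitigation for the training-data imbalance described above is to reweight samples so that under-represented subgroups contribute equally during training. A hedged sketch of that idea, with illustrative counts:

```python
# Sketch: inverse-frequency sample weights to counter subgroup
# imbalance in a training set. Counts are illustrative only, and
# this is a generic technique, not Microsoft's published method.
from collections import Counter

subgroup_labels = ["lighter male"] * 800 + ["darker female"] * 50  # skewed data

counts = Counter(subgroup_labels)
n, k = len(subgroup_labels), len(counts)

# Weight each sample inversely to its subgroup's frequency so that
# every subgroup contributes equally to the training loss.
weights = [n / (k * counts[g]) for g in subgroup_labels]
```

Reweighting is only a partial remedy; collecting more diverse training images, as the article implies, addresses the imbalance at its source.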


Amazon receives challenge from face recognition researcher over biased AI

USATODAY - Tech Top Stories

Facial recognition technology was already seeping into everyday life -- from photos on Facebook to police scans of mugshots -- when Joy Buolamwini noticed a serious glitch: some of the software couldn't detect dark-skinned faces like hers. At one point she had to hold a white mask over her face just so the software would detect a face at all. That revelation prompted the Massachusetts Institute of Technology researcher to launch a project that's having an outsize influence on the debate over how artificial intelligence should be deployed in the real world. Her tests of facial analysis tools sold by brand-name tech firms such as Amazon uncovered much higher error rates in classifying the gender of darker-skinned women than of lighter-skinned men.


Why facial recognition's racial bias problem is so hard to crack

#artificialintelligence

Jimmy Gomez is a California Democrat, a Harvard graduate and one of the few Hispanic lawmakers serving in the US House of Representatives. But to Amazon's facial recognition system, he looks like a potential criminal. Gomez was one of 28 US Congress members falsely matched with mugshots of people who've been arrested, in a test of Amazon's Rekognition program that the American Civil Liberties Union ran last year. Nearly 40 percent of the false matches by Amazon's tool, which is being used by police, involved people of color.


Amazon facial-identification software used by police falls short on tests for accuracy and bias, new research finds

Washington Post - Technology News

Facial-recognition software developed by Amazon and marketed to local and federal law enforcement as a powerful crime-fighting tool struggles to pass basic tests of accuracy, such as correctly identifying a person's gender, according to new research released Thursday. Researchers with the M.I.T. Media Lab also said Amazon's Rekognition system performed more accurately when assessing lighter-skinned faces, raising concerns that biased results could tarnish the artificial-intelligence technology's use by police and in public venues, including airports and schools. Amazon's system performed flawlessly in predicting the gender of lighter-skinned men, the researchers said, but misidentified the gender of darker-skinned women in roughly 30 percent of their tests. Rival facial-recognition systems from Microsoft and other companies performed better but were also error-prone, they said. The problem, AI researchers and engineers say, is that the vast sets of images the systems were trained on skew heavily toward white men.
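The figures above come from disaggregated evaluation: computing the error rate separately for each skin-tone and gender subgroup rather than reporting one aggregate accuracy number. A minimal sketch of that bookkeeping, with illustrative records rather than the study's actual data:

```python
# Sketch: disaggregated error rates, in the spirit of the MIT Media
# Lab audits. Records and subgroup names are illustrative only.
from collections import defaultdict

records = [
    # (subgroup, true_gender, predicted_gender)
    ("darker-skinned female", "Female", "Male"),
    ("lighter-skinned male", "Male", "Male"),
    ("darker-skinned female", "Female", "Female"),
    # ... one record per benchmark image
]

errors = defaultdict(int)
totals = defaultdict(int)

for subgroup, truth, prediction in records:
    totals[subgroup] += 1
    if prediction != truth:
        errors[subgroup] += 1

# A single aggregate score can hide a ~30% error rate on one
# subgroup behind near-perfect performance on another.
for subgroup in sorted(totals):
    rate = errors[subgroup] / totals[subgroup]
    print(f"{subgroup}: {rate:.1%} misclassified ({totals[subgroup]} images)")
```

This is why the researchers report per-group numbers: a system that is flawless on lighter-skinned men and wrong 30 percent of the time on darker-skinned women can still post a high overall accuracy if the benchmark skews toward the former.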