Amazon facial-identification software used by police falls short on tests for accuracy and bias, new research finds

Washington Post - Technology News

Facial-recognition software developed by Amazon and marketed to local and federal law enforcement as a powerful crime-fighting tool struggles to pass basic tests of accuracy, such as correctly identifying a person's gender, new research released Thursday says. Researchers with M.I.T. Media Lab also said Amazon's Rekognition system performed more accurately when assessing lighter-skinned faces, raising concerns about how biased results could tarnish the artificial-intelligence technology's use by police and in public venues, including airports and schools. Amazon's system performed flawlessly in predicting the gender of lighter-skinned men, the researchers said, but misidentified the gender of darker-skinned women in roughly 30 percent of their tests. Rival facial-recognition systems from Microsoft and other companies performed better but were also error-prone, they said. The problem, AI researchers and engineers say, is that the vast sets of images the systems have been trained on skew heavily toward white men.


Facial-recognition technology works best if you're a white guy, study says

#artificialintelligence

Facial-recognition technology is improving by leaps and bounds. Some commercial software can now tell the gender of a person in a photograph. When the person in the photo is a white man, the software is right 99 percent of the time. But the darker the skin, the more errors arise -- up to nearly 35 percent for images of darker-skinned women, according to a new study that breaks fresh ground by measuring how the technology works on people of different races and gender. These disparate results, calculated by Joy Buolamwini, a researcher at the Massachusetts Institute of Technology Media Lab, show how some of the biases in the real world can seep into artificial intelligence, the computer systems that inform facial recognition.
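
The study's key technique is a disaggregated accuracy audit: rather than reporting one overall accuracy number, the classifier is scored separately on intersectional subgroups such as darker-skinned women and lighter-skinned men. The sketch below illustrates that idea in Python; the record layout, group labels, and sample values are assumptions made for illustration, not the study's actual data or code.

```python
from collections import defaultdict

# Hypothetical prediction records; the fields and sample values are
# illustrative assumptions, not the study's benchmark.
predictions = [
    {"true": "female", "pred": "male",   "group": "darker-skinned women"},
    {"true": "male",   "pred": "male",   "group": "lighter-skinned men"},
    {"true": "female", "pred": "female", "group": "lighter-skinned women"},
    {"true": "male",   "pred": "male",   "group": "darker-skinned men"},
    # ... in practice, many labeled examples per subgroup
]

totals = defaultdict(int)   # examples evaluated per subgroup
errors = defaultdict(int)   # gender misclassifications per subgroup

for record in predictions:
    group = record["group"]
    totals[group] += 1
    if record["pred"] != record["true"]:
        errors[group] += 1

# Large gaps between subgroup error rates are the disparity the
# articles describe (near 0 percent for lighter-skinned men versus
# roughly 30 to 35 percent for darker-skinned women).
for group, count in sorted(totals.items()):
    rate = errors[group] / count
    print(f"{group}: {rate:.1%} error over {count} examples")
```

Reporting per-subgroup rates instead of a single aggregate is what exposed the disparities in the first place: a model can post a high overall score while failing badly on groups that are rare in its test set.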


Facial recognition technology is finally more accurate in identifying people of color. Could that be used against immigrants?

Washington Post - Technology News

Microsoft this week announced its facial-recognition system is now more accurate in identifying people of color, touting its progress at tackling one of the technology's biggest biases. But critics, citing Microsoft's work with Immigration and Customs Enforcement, quickly seized on how that improved technology might be used. The agency contracts with Microsoft for a set of cloud-computing tools that the tech giant says is largely limited to office work, but which can also include face recognition. Columbia University professor Alondra Nelson tweeted, "We must stop confusing 'inclusion' in more 'diverse' surveillance systems with justice and equality." Today's facial-recognition systems more often misidentify people of color because of a long-running data problem: The massive sets of facial images they train on skew heavily toward white men.
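
The data problem the article names is measurable: before any model is trained, a simple tally of the training images' demographic composition will surface the skew. A minimal sketch of that check, assuming each image carries demographic metadata; the field names and values here are invented for illustration, not any real dataset's schema.

```python
from collections import Counter

# Hypothetical per-image metadata for a face-recognition training set;
# the field names and values are assumptions, not a real dataset's schema.
training_metadata = [
    {"skin_tone": "lighter", "gender": "male"},
    {"skin_tone": "lighter", "gender": "male"},
    {"skin_tone": "lighter", "gender": "female"},
    {"skin_tone": "darker",  "gender": "female"},
    # ... one entry per training image
]

composition = Counter(
    (meta["skin_tone"], meta["gender"]) for meta in training_metadata
)

total = sum(composition.values())
for (tone, gender), count in composition.most_common():
    print(f"{tone}-skinned {gender}: {count} images ({count / total:.0%})")

# A distribution dominated by lighter-skinned male faces is the kind of
# skew the article says drives higher error rates for everyone else.
```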


Amazon defends its facial-recognition technology, supports calls for legislation

#artificialintelligence

It's unclear how many law-enforcement groups are currently using Amazon's technology; it has been used by police departments in Florida and Oregon. An Amazon spokesperson said the company doesn't share customers' names or use cases without their permission. The company also said it supports "calls for an appropriate national legislative framework that protects individual civil rights and ensures that governments are transparent in their use of facial recognition technology." Amazon is the latest major tech company to indicate its support for such legislation. Microsoft has also said it is in favor of laws that regulate how facial-recognition technology can be used.