Facial-recognition software developed by Amazon and marketed to local and federal law enforcement as a powerful crime-fighting tool struggles to pass basic tests of accuracy, such as correctly identifying a person's gender, new research released Thursday says. Researchers with the M.I.T. Media Lab also said Amazon's Rekognition system performed more accurately when assessing lighter-skinned faces, raising concerns about how biased results could tarnish the artificial-intelligence technology's use by police and in public venues, including airports and schools. Amazon's system performed flawlessly in predicting the gender of lighter-skinned men, the researchers said, but misidentified the gender of darker-skinned women in roughly 30 percent of their tests. Rival facial-recognition systems from Microsoft and other companies performed better but were also error-prone, they said. The problem, AI researchers and engineers say, is that the vast sets of images the systems have been trained on skew heavily toward white men.
On May 14, 2019, San Francisco became the first major city in the United States to ban the use of facial-recognition technology by its government and law enforcement agencies. The ban comes as part of a broader anti-surveillance ordinance, which as of May 14 was set to go into effect in about a month. Local officials and civil-liberties advocates seem to fear the repercussions of allowing facial-recognition technology to proliferate throughout San Francisco, while supporters of the software claim that the ban could limit technological progress. In this article, I'll examine the ban that just took place in San Francisco, explore the concerns surrounding facial-recognition technology, and explain why an outright ban may not be the best course of action.
Deep learning is a technology with a lot of promise: helping computers "see" the world, understand speech, and make sense of language. But away from the headlines about computers challenging humans at everything from spotting faces in a crowd to transcribing speech, real-world performance has been more mixed. One deep-learning technology whose real-world results have often disappointed is facial recognition. In the UK, police in Cardiff and London used facial-recognition systems on multiple occasions in 2017 to flag persons of interest captured on video at major events. Unfortunately, more than 90% of the people these systems picked out turned out to be false matches.
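A false-match rate above 90% can arise even from a system whose per-face error rate sounds small, because genuine persons of interest are vastly outnumbered in a large crowd. The sketch below illustrates this base-rate effect; the crowd size, number of targets, and error rates are illustrative assumptions, not figures reported for the Cardiff or London deployments.

```python
# Sketch of how rare targets in a large crowd make false matches dominate,
# even when the system's per-face false-positive rate is low.
# All numbers are illustrative assumptions, not figures from the article.

def false_match_share(crowd_size, num_targets, true_positive_rate, false_positive_rate):
    """Expected fraction of flagged faces that are false matches."""
    true_alerts = num_targets * true_positive_rate
    false_alerts = (crowd_size - num_targets) * false_positive_rate
    return false_alerts / (true_alerts + false_alerts)

# Assume a 100,000-person crowd containing 50 genuine persons of interest,
# a 90% chance of flagging a real target, and a 1% false-positive rate.
share = false_match_share(100_000, 50, 0.90, 0.01)
print(f"{share:.1%} of alerts are false matches")  # → 95.7% of alerts are false matches
```

Under these assumed numbers, roughly 1,000 innocent faces are flagged for every 45 real targets caught, so most alerts are wrong despite the 99%-accurate-sounding false-positive rate.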