Amazon tried to sell its AI-powered facial recognition technology to the US government to help catch undocumented immigrants. It pitched its product to Immigration and Customs Enforcement (ICE) officials this summer, leaked emails show. The emails were first reported by The Daily Beast after the advocacy group Project on Government Oversight obtained them through a Freedom of Information Act request. They revealed Amazon's intention to offer its controversial Rekognition face-scanning technology for use in the country's immigration enforcement. The technology has attracted scrutiny since it was revealed that Amazon had sold it to several US police departments.
Amazon's facial recognition technology, Rekognition, continues to cause controversy. Documents recently obtained by BuzzFeed News offer a behind-the-scenes look at how Orlando police have been using the technology. After public outcry led the city to let the original pilot program expire, Orlando started a second pilot with an increased number of face-scanning cameras. Amazon describes Rekognition broadly as a visual analysis tool, but when deployed by law enforcement it can scan faces caught on camera and match them against faces in criminal databases.
The accuracy of police facial recognition systems has been criticised by a UK privacy group. Two forces have been testing facial recognition cameras at public events in an effort to catch wanted criminals. Big Brother Watch said its investigation showed the technology was "dangerous and inaccurate" as it had wrongly flagged up a "staggering" number of innocent people as suspects. But police have defended its use and say additional safeguards are in place. Police facial recognition cameras have been trialled at events such as football matches, festivals and parades.
SAN FRANCISCO - San Francisco is on track to become the first U.S. city to ban the use of facial recognition by police and other city agencies, reflecting a growing backlash against a technology that's creeping into airports, motor vehicle departments, stores, stadiums and home security cameras. Government agencies around the U.S. have used the technology for more than a decade to scan databases for suspects and prevent identity fraud. But recent advances in artificial intelligence have created more sophisticated computer vision tools, making it easier for police to pinpoint a missing child or protester in a moving crowd or for retailers to analyze a shopper's facial expressions as they peruse store shelves. Efforts to restrict its use are getting pushback from law enforcement groups and the tech industry, though it's far from a united front. Microsoft, while opposed to an outright ban, has urged lawmakers to set limits on the technology, warning that leaving it unchecked could enable an oppressive dystopia reminiscent of George Orwell's novel "1984."
Facial recognition technology used by UK police is making thousands of mistakes - and now there could be legal repercussions. The civil liberties group Big Brother Watch has teamed up with Baroness Jenny Jones to ask the government and the Met to stop using the technology. They claim the use of facial recognition has proven to be "dangerously authoritarian", inaccurate and a breach of rights protecting privacy and freedom of expression. If their request is rejected, the group says it will take the case to court in what would be the first legal challenge of its kind. South Wales Police, London's Met and Leicestershire Police are all trialling automated facial recognition systems in public places to identify wanted criminals.