The use of facial recognition by police and other law enforcement is proving divisive, with Verdict readers almost evenly split. In a poll on Verdict that drew 644 responses between 24 January and 7 February, a slim majority said they were not happy with police use of facial recognition. The response comes as the EU is considering a ban on the use of facial recognition until the technology reaches a greater stage of maturity. A draft white paper, first published by the news website EURACTIV in January, showed that a temporary ban was being considered by the European Commission. It proposed that "use of facial recognition technology by private or public actors in public spaces would be prohibited for a definite period".
Amazon followed suit a couple of days later, putting a temporary, year-long ban on facial recognition contracts with American police departments. Finally, Microsoft said that it, too, would no longer sell facial recognition to American police departments without federal regulation. Details aside, these statements all share an implicit acknowledgement of the danger that facial recognition poses to human rights and democracy. This self-restraint from Big Tech does not, however, address the very same dangers within the EU: although these technologies are used in EU member states as well, the decisions from IBM, Amazon and Microsoft apply only to the American context.
Over the past few months, high-profile incidents in the United Kingdom, one of the most surveilled societies in the world, have forced people to consider how facial recognition will be used there. Brexit taking up most of the oxygen in the room hasn't made that debate any easier, but in conversations with VentureBeat, three experts from different backgrounds -- Ada Lovelace Institute director Carly Kind, the U.K.'s surveillance camera commissioner Tony Porter, and University of Essex professor Daragh Murray, who studies police use of facial recognition -- all agreed that the U.K. needs to find a middle ground. All three also agree that years of Brexit debate have stifled necessary reform, and that leaving the European Union could carry consequences for years to come as police and businesses continue experimenting with facial recognition in the U.K. They worry that inaction could lead to calls for a ban, to overregulation, or to far more dystopian scenarios of facial recognition everywhere. The Terminator has serious competition as the symbol of fear of technology trampling human rights. Facial recognition has become a major issue around the globe, owing both to its deeply personal and pervasive nature and to advances in AI that now make it work in real time.
The high-profile case of a Black man wrongly arrested earlier this year wasn't the first misidentification linked to the controversial facial recognition technology used by Detroit police, the Free Press has learned. Last year, a 25-year-old Detroit man was wrongly accused of a felony for supposedly reaching into a teacher's vehicle, grabbing a cell phone and throwing it, cracking the screen and breaking the case. Detroit police used facial recognition technology in that investigation, too; it identified Michael Oliver as an investigative lead. After that hit, the teacher whose phone had been snatched from his hands identified Oliver in a photo lineup as the person responsible.
Police in London are moving ahead with deploying a facial recognition camera system despite privacy concerns and evidence that the technology is riddled with false positives. The Metropolitan Police, the U.K.'s biggest police department with jurisdiction over most of London, announced Friday it would begin rolling out new "live facial recognition" cameras in London, making the capital one of the largest cities in the West to adopt the controversial technology. The "Met," as the police department is known in London, said in a statement that the facial recognition technology, which is meant to identify people on a watch list and alert police to their real-time location, would be "intelligence-led" and deployed only to specific locations. It is expected to be rolled out as soon as next month. However, privacy activists immediately raised concerns, noting that independent reviews of trials of the technology showed a failure rate of 81%.