Facial recognition technology used by the UK police is making thousands of mistakes, a new report has found. South Wales Police, London's Met and Leicestershire are all trialling automated facial recognition systems in public places to identify wanted criminals. According to police figures, the system often makes more incorrect matches than correct ones. Experts warned the technology could lead to false arrests and described it as a 'dangerously inaccurate policing tool'.
Facial recognition technology used by the UK police is making thousands of mistakes - and now there could be legal repercussions. The civil liberties group Big Brother Watch has teamed up with Baroness Jenny Jones to ask the government and the Met to stop using the technology. They claim the use of facial recognition has proven to be 'dangerously authoritarian', inaccurate and a breach of rights protecting privacy and freedom of expression. If their request is rejected, the group says it will take the case to court in what would be the first legal challenge of its kind. South Wales Police, London's Met and Leicestershire are all trialling automated facial recognition systems in public places to identify wanted criminals.
Facial recognition software used by the UK's biggest police force has returned false positives in more than 98 per cent of alerts generated, The Independent can reveal, with the country's biometrics regulator calling it "not yet fit for use". The Metropolitan Police's system has produced 104 alerts, of which only two were later confirmed to be positive matches, a freedom of information request showed. In its response, the force said it did not consider the inaccurate matches "false positives" because alerts were checked a second time after they occurred. Facial recognition technology scans people in a video feed and compares their images to pictures stored in a reference library or watch list. It has been used at large events such as the Notting Hill Carnival and a Six Nations rugby match.
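To make the headline figure concrete, here is a minimal Python sketch of the arithmetic, using only the counts from the FOI response above (104 alerts, two confirmed matches); the function name is illustrative, not part of the Met's system:

def false_positive_rate(total_alerts: int, confirmed_matches: int) -> float:
    # Share of alerts that did not correspond to the watch-listed person.
    false_alerts = total_alerts - confirmed_matches
    return false_alerts / total_alerts

# 102 of the 104 alerts were false, i.e. roughly 98.1 per cent.
print(f"{false_positive_rate(104, 2):.1%}")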
Thousands of attendees of the 2017 Champions League final in Cardiff, Wales were mistakenly identified as potential criminals by facial recognition technology used by local law enforcement. According to the Guardian, the South Wales police scanned the crowd of more than 170,000 people who traveled to the nation's capital for the soccer match between Real Madrid and Juventus. The cameras identified 2,470 people as criminals. Having that many potential lawbreakers in attendance might make sense if the event were, say, a convict convention, but it seems pretty high for a soccer match. As it turned out, the cameras were a little overly aggressive in trying to spot some bad guys.
Automated facial recognition poses one of the greatest threats to individual freedom and should be banned from use in public spaces, according to the director of the campaign group Liberty. Martha Spurrier, a human rights lawyer, said the technology had such fundamental problems that, despite police enthusiasm for the equipment, its use on the streets should not be permitted. She said: "I don't think it should ever be used. It is one of, if not the, greatest threats to individual freedom, partly because of the intimacy of the information it takes and hands to the state without your consent, and without even your knowledge, and partly because you don't know what is done with that information." Police in England and Wales have used automated facial recognition (AFR) to scan crowds for suspected criminals in trials in city centres, at music festivals, sports events and elsewhere.