Police use of automatic facial recognition technology to search for people in crowds is lawful, the high court in Cardiff has ruled. A judge concluded that although the mass surveillance system interferes with the privacy rights of those scanned by security cameras, it is not illegal. The legal challenge was brought by Ed Bridges, a former Liberal Democrat councillor from Cardiff, who noticed the cameras when he went out to buy a lunchtime sandwich. He was supported by the human rights organisation Liberty. Bridges said he was distressed by police use of the technology, which he believes captured his image while he was out shopping and later at a peaceful protest against the arms trade.
British privacy activist Ed Bridges is set to appeal a landmark ruling that endorses the "sinister" use of facial recognition technology by the police to hunt for suspects. In what is believed to be the world's first case of its kind, Bridges told the High Court in Wales that the local police breached his rights by scanning his face without consent. "This sinister technology undermines our privacy and I will continue to fight against its unlawful use to ensure our rights are protected and we are free from disproportionate government surveillance," Bridges said in a statement. But judges said the police's use of facial recognition technology was lawful and legally justified. Civil rights group Liberty, which represented 36-year-old Bridges, said it would appeal the "disappointing" decision, while police chiefs said they understood the fears of the public.
The last day of January 2019 was sunny, yet bitterly cold in Romford, east London. Shoppers scurrying from retailer to retailer wrapped themselves in winter coats, scarves and hats. The temperature never rose above three degrees Celsius. For police officers positioned next to an inconspicuous blue van, just metres from Romford's Overground station, one man stood out among the thin winter crowds. The man, wearing a beige jacket and blue cap, had pulled his jacket over his face as he moved in the direction of the police officers.
A legal code of practice is needed before face recognition technology can be safely deployed by police forces in public places, says the UK's data regulator. In a blog post, the Information Commissioner's Office (ICO) said it has serious concerns about the use of the technology because it relies on large amounts of personal information. Current laws, codes and practices "will not drive the ethical and legal approach that's needed to truly manage the risk that this technology presents," said the information commissioner, Elizabeth Denham. She called for police forces to be compelled to demonstrate that face recognition is "strictly necessary, balanced and effective" in each case where it is deployed. Face recognition can map faces in a crowd by measuring the distances between facial features, then compare the results against a "watch list" of images, which can include suspects, missing people and persons of interest.
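The watch-list matching the ICO describes can be illustrated with a minimal sketch. This is a hypothetical toy example, not any police system's actual algorithm: it assumes each face has already been reduced to a short numeric feature vector (real systems use learned embeddings with hundreds of dimensions), and the names, vectors and threshold are invented for illustration.

```python
import math

# Hypothetical watch list: identity -> pre-computed face feature vector.
# Real deployments would store high-dimensional embeddings, not 3 numbers.
WATCH_LIST = {
    "suspect_A": [0.1, 0.8, 0.3],
    "missing_person_B": [0.9, 0.2, 0.5],
}

def euclidean(a, b):
    """Straight-line distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_face(probe, watch_list, threshold=0.25):
    """Return the watch-list identity closest to the probe vector,
    or None if no entry falls within the distance threshold."""
    best_name, best_dist = None, threshold
    for name, vector in watch_list.items():
        d = euclidean(probe, vector)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name

# A face scanned from the crowd that closely resembles suspect_A:
print(match_face([0.12, 0.79, 0.31], WATCH_LIST))  # -> suspect_A
# A face that resembles no one on the list:
print(match_face([0.5, 0.5, 0.5], WATCH_LIST))     # -> None
```

The threshold is the policy-critical parameter the regulator's concerns turn on: set it too loose and passers-by are wrongly flagged; too strict and genuine matches are missed.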
Police forces, hospitals and councils struggle to understand how to use artificial intelligence because of a lack of clear ethical guidance from the government, according to the country's only surveillance regulator. The surveillance camera commissioner, Tony Porter, said he received requests for guidance all the time from public bodies which do not know where the limits lie when it comes to the use of facial, biometric and lip-reading technology. "Facial recognition technology is now being sold as standard in CCTV systems, for example, so hospitals are having to work out if they should use it," Porter said. "Police are increasingly wearing body cameras. What are the appropriate limits for their use? The problem is that there is insufficient guidance for public bodies to know what is appropriate and what is not, and the public have no idea what is going on because there is no real transparency." The watchdog's comments came as it emerged that Downing Street had commissioned a review led by the Committee on Standards in Public Life, whose chairman had called on public bodies to reveal when they use algorithms in decision making. Lord Evans, a former MI5 chief, told the Sunday Telegraph that "it was very difficult to find out where AI is being used in the public sector" and that "at the very minimum, it should be visible, and declared, where it has the potential for impacting on civil liberties and human rights and freedoms". AI is increasingly deployed across the public sector in surveillance and elsewhere. The high court ruled in September that the police use of automatic facial recognition technology to scan people in crowds was lawful. Its use by South Wales police was challenged by Ed Bridges, a former Lib Dem councillor, who noticed the cameras when he went out to buy a lunchtime sandwich, but the court held that the intrusion into privacy was proportionate.
Durham police have spent three years evaluating an AI tool devised by Cambridge University to predict whether an arrested person is likely to reoffend and so should not be released on bail. Similar technologies used in the US, where they also guide sentencing, have been accused of concluding that black people are more likely to be future criminals, but the results of the British trial are yet to be made public. The Committee on Standards in Public Life is due to report to Boris Johnson in February, but Porter said the task was urgent because of the rapid pace of technological change and an unclear system of regulation in which no single body had oversight. The information commissioner is responsible for the use of personal data but not surveillance, while Porter's office regulates the use of CCTV systems and all technologies attached to them, including facial recognition and lip-reading software. "We've been calling for a wider review for months," Porter said. "The SCC, for example, is the only surveillance regulator in England and Wales, and we date back to when the iPhone 5 was new and exciting."