The UK Court of Appeal has unanimously ruled against a face-recognition system used by South Wales Police. The judgment, which called the use of automated face recognition (AFR) "unlawful", could have ramifications for the widespread use of such technology across the UK. But there is disagreement about exactly what the consequences will be. Ed Bridges, who launched the original case after police cameras digitally analysed his face in the street, had appealed, with the support of the civil rights campaign group Liberty, against the use of face recognition by police. The police force argued in court that the technology was similar to the use of closed-circuit television (CCTV) cameras in cities.
In the 15th century, an insidious scourge stalked Europe. It threatened to put people out of work, ruin their brains, and even take them further away from God. According to Abbot Johannes Trithemius, it was the printing press. Now, of course, the printing press and its many effects are seen as not just good, but foundational to modern societies -- despite the fact that it was also used to print Adolf Hitler's manifesto "Mein Kampf." But this is how we tend to deal with technology: as an often-ambivalent thing around which we work to highlight the positive and mitigate the negative. Are there, however, some technologies so heavily slanted to the negative that we should just outright ban them?
A European privacy body said it "has doubts" that using facial recognition technology developed by U.S. company Clearview AI is legal in the EU. Clearview AI allows users to link facial images of an individual to a database of more than 3 billion pictures scraped from social media and other sources. According to media reports, over 600 law enforcement agencies worldwide are using the controversial app. But in a statement Wednesday, the European Data Protection Board said that "the use of a service such as Clearview AI by law enforcement authorities in the European Union would, as it stands, likely not be consistent with the EU data protection regime." The body issued the statement after MEPs raised questions regarding the use of the company's software.
The authors of the Harrisburg University study make explicit their desire to provide "a significant advantage for law enforcement agencies and other intelligence agencies to prevent crime," a goal that a co-author and former NYPD police officer outlined in the original press release. At a time when the legitimacy of the carceral state, and policing in particular, is being challenged on fundamental grounds in the United States, there is high demand in law enforcement for research of this nature, research which erases historical violence and manufactures fear through the so-called prediction of criminality. Publishers and funding agencies serve a crucial role in feeding this ravenous maw by providing platforms and incentives for such research. The circulation of this work by a major publisher like Springer would represent a significant step towards the legitimation and application of repeatedly debunked, socially harmful research in the real world. To reiterate our demands, the review committee must publicly rescind the offer for publication of this specific study, along with an explanation of the criteria used to evaluate it. Springer must issue a statement condemning the use of criminal justice statistics to predict criminality and acknowledging their role in incentivizing such harmful scholarship in the past. Finally, all publishers must refrain from publishing similar studies in the future.
London (CNN Business): IBM is canceling its facial recognition programs and calling for an urgent public debate on whether the technology should be used in law enforcement. In a letter to Congress on Monday, IBM CEO Arvind Krishna said the company wants to work with lawmakers to advance justice and racial equity through police reform, educational opportunities and the responsible use of technology. "We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies," he said, noting that the company no longer offers general purpose facial recognition or analysis software. "IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values," he added. Krishna is of Indian origin and IBM's first CEO of color.
The rollout of facial recognition cameras in London is facing disruption as citizens are now using face coverings that could render the technology ineffective. The United Kingdom has been a keen adopter of surveillance technology, including facial recognition cameras, in recent years, despite concerns that widespread spying erodes citizens' right to privacy. Last year, the Information Commissioner's Office (ICO) launched an investigation into a trial of facial recognition cameras installed at King's Cross, a busy underground and overground train station, based on claims that commuters and passers-by were being surveilled without explicit consent. At the time, UK Information Commissioner Elizabeth Denham called the scheme "a potential threat to privacy that should concern us all." The Metropolitan Police has also launched its own trials at busy hotspots in the capital.
As the first step on the road to a powerful, high tech surveillance apparatus, it was a little underwhelming: a blue van topped by almost comically intrusive cameras, a few police officers staring intently but ineffectually at their smartphones and a lot of bemused shoppers. As unimpressive as the moment may have been, however, the decision by London's Metropolitan Police to expand its use of live facial recognition (LFR) marks a significant shift in the debate over privacy, security and surveillance in public spaces. Despite dismal accuracy results in earlier trials, the Metropolitan Police Service (MPS) has announced it is pushing ahead with the roll-out of LFR at locations across London. The MPS says that cameras will be focused on a small targeted area "where intelligence suggests [they] are most likely to locate serious offenders," and will match faces against a database of individuals wanted by police. The cameras will be accompanied by clear signposting and officers handing out leaflets (it is unclear why the MPS thinks that serious offenders would choose to walk through an area full of police officers handing out leaflets to passersby).
The use of facial recognition by police and other law enforcement is proving divisive, with Verdict readers split over its use. In a poll on Verdict that saw responses from 644 readers between 24 January and 7 February, the majority said they were not happy with the use of facial recognition by police, but only by a slim margin. The response comes as the EU is considering a ban on the use of facial recognition until the technology reaches a greater stage of maturity. A draft white paper, first published by the news website EURACTIV in January, showed that a temporary ban was being considered by the European Commission. It proposed that "use of facial recognition technology by private or public actors in public spaces would be prohibited for a definite period."
The London Metropolitan Police has announced that it intends to begin using Live Facial Recognition (LFR) technology in various parts of the UK's capital city. The police explained that the technology will be "intelligence-led and deployed to specific locations in London," used for five to six hours at a time, with bespoke lists drawn up of "wanted individuals." As the BBC reports, the police claim the technology is able to identify 70 percent of wanted suspects while generating false alerts only once per 1,000 people detected by the system. The cameras will be rolled out within a month and clearly signposted. Police officers are going to hand out leaflets about the facial recognition technology and consult with local communities.
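The claimed figures invite a quick base-rate check. The sketch below is purely illustrative: the 70 percent identification rate and one-false-alert-per-1,000-faces rate come from the police's claims reported above, while the crowd size and the number of wanted individuals present in it are invented assumptions for the sake of the arithmetic.

```python
# Illustrative base-rate arithmetic using the police's claimed figures.
# The crowd size and number of wanted individuals are hypothetical.

true_positive_rate = 0.70    # claimed share of wanted suspects correctly flagged
false_alert_rate = 1 / 1000  # claimed false alerts per person scanned

crowd = 100_000              # assumed faces scanned during a deployment
wanted_present = 10          # assumed wanted individuals among them

expected_true_hits = wanted_present * true_positive_rate
expected_false_alerts = (crowd - wanted_present) * false_alert_rate

print(f"expected correct matches: {expected_true_hits:.0f}")
print(f"expected false alerts: {expected_false_alerts:.0f}")
```

Under these assumptions, the system would produce roughly 100 false alerts against about 7 correct matches, so most of the people flagged would be misidentified. This is only a consequence of the low prevalence of wanted individuals in a large crowd, not a claim about any specific deployment.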