A European privacy body said it "has doubts" that using facial recognition technology developed by U.S. company Clearview AI is legal in the EU. Clearview AI allows users to link facial images of an individual to a database of more than 3 billion pictures scraped from social media and other sources. According to media reports, over 600 law enforcement agencies worldwide are using the controversial app. But in a statement Wednesday, the European Data Protection Board said that "the use of a service such as Clearview AI by law enforcement authorities in the European Union would, as it stands, likely not be consistent with the EU data protection regime." The body issued the statement after MEPs raised questions regarding the use of the company's software.
The authors of the Harrisburg University study make explicit their desire to provide "a significant advantage for law enforcement agencies and other intelligence agencies to prevent crime," as a co-author and former NYPD police officer outlined in the original press release. At a time when the legitimacy of the carceral state, and policing in particular, is being challenged on fundamental grounds in the United States, there is high demand in law enforcement for research of this nature: research that erases historical violence and manufactures fear through the so-called prediction of criminality. Publishers and funding agencies serve a crucial role in feeding this ravenous maw by providing platforms and incentives for such research. The circulation of this work by a major publisher like Springer would represent a significant step towards the legitimation and application of repeatedly debunked, socially harmful research in the real world. To reiterate our demands: the review committee must publicly rescind the offer for publication of this specific study, along with an explanation of the criteria used to evaluate it. Springer must issue a statement condemning the use of criminal justice statistics to predict criminality and acknowledging its role in incentivizing such harmful scholarship in the past. Finally, all publishers must refrain from publishing similar studies in the future.
A Russian company has launched a programme that can identify a stranger among 300 million Twitter users in less than a second. The social media platform has responded to the new software, called "FindFace", saying its use is in "violation" of its rules and that it is taking the matter "very seriously". "We see lots of opportunities for Twitter users on the service," Artem Kukharenko, co-founder of NTechLab, told BuzzFeed. "We think this is something many people will use," he added, claiming the technology could be used to reduce spam profiles. "Not in the US, but in other countries there is a real problem of politicians, reporters, finding that someone created a fake account for them.