Amazon may have banned police from using its facial recognition technology, but a new report shows the tech giant is providing thousands of departments with video and audio footage from Ring. The Electronic Frontier Foundation, a nonprofit that defends civil liberties, found that over 1,400 agencies are working with the Amazon-owned company, and that hundreds of them have 'deadly histories.' The data reveals that half of the agencies had at least one fatal encounter in the last five years and that, altogether, they are responsible for a third of fatal encounters nationwide. These departments are also involved in the deaths of Breonna Taylor, Alton Sterling, Botham Jean, Antonio Valenzuela, Michael Ramos and Sean Monterrosa.
Massachusetts Sen. Ed Markey and Rep. Ayanna Pressley are pushing to ban the federal government's use of facial recognition technology, after Boston last week nixed the city's use of the technology and tech giants paused their sale of facial surveillance tools to police. The momentum to stop government use of facial recognition technology comes in the wake of the police killing of George Floyd in Minneapolis -- a black man killed by a white police officer. Floyd's death has sparked nationwide protests for racial justice and triggered calls for police reform, including changes to the ways police track people. Facial recognition technology contributes to the "systemic racism that has defined our society," Markey said on Sunday. "We cannot ignore that facial recognition technology is yet another tool in the hands of law enforcement to profile and oppress people of color in our country," Markey said during an online press briefing.
Members of Congress introduced a new bill on Thursday that would ban government use of biometric technology, including facial recognition tools. Reps. Pramila Jayapal and Ayanna Pressley announced the Facial Recognition and Biometric Technology Moratorium Act, which they said resulted from a growing body of research that "points to systematic inaccuracy and bias issues in biometric technologies which pose disproportionate risks to non-white individuals." The bill came just one day after the first documented instance of police mistakenly arresting a man due to facial recognition software. There has been long-standing, widespread concern about the use of facial recognition software from lawmakers, researchers, rights groups and even the people behind the technology. Multiple studies over the past three years have repeatedly shown that the tool is still not accurate, especially for people with darker skin.
The authors of the Harrisburg University study make explicit their desire to provide "a significant advantage for law enforcement agencies and other intelligence agencies to prevent crime" as a co-author and former NYPD police officer outlined in the original press release. At a time when the legitimacy of the carceral state, and policing in particular, is being challenged on fundamental grounds in the United States, there is high demand in law enforcement for research of this nature, research which erases historical violence and manufactures fear through the so-called prediction of criminality. Publishers and funding agencies serve a crucial role in feeding this ravenous maw by providing platforms and incentives for such research. The circulation of this work by a major publisher like Springer would represent a significant step towards the legitimation and application of repeatedly debunked, socially harmful research in the real world. To reiterate our demands, the review committee must publicly rescind the offer for publication of this specific study, along with an explanation of the criteria used to evaluate it. Springer must issue a statement condemning the use of criminal justice statistics to predict criminality and acknowledging their role in incentivizing such harmful scholarship in the past. Finally, all publishers must refrain from publishing similar studies in the future.
Nearly a decade ago, Santa Cruz was among the first cities in the U.S. to adopt predictive policing. This week, the California city became the first in the country to ban the practice. In a unanimous decision Tuesday, the City Council passed an ordinance that bans the use of data to predict where crimes may occur and also bars the city from using facial recognition software. In recent years, both predictive policing and facial recognition technology have been criticized as racially prejudiced, often contributing to increased patrols in Black or brown neighborhoods or false accusations against people of color. Predictive policing uses algorithms that direct officers to patrol locations identified as high-crime based on victim reports.
Microsoft isn't selling facial recognition tech to local police, but it apparently doesn't have that reservation for federal law enforcement. The ACLU has published emails indicating that Microsoft "aggressively" pitched the Drug Enforcement Administration on facial recognition between at least September 2017 and November 2018 (the emails extend to December 2018). The tech firm went so far as to host DEA staff for numerous demos and training sessions, and there was even a pilot program. The Administration apparently declined to buy the technology in November 2018, in part because of public concerns about the FBI's use of facial recognition data. The ACLU sued the DEA and FBI in October 2019 to obtain records showing how they use facial recognition.
But on Wednesday, June 10, Amazon shocked civil rights activists and researchers when it announced that it would place a one-year moratorium on police use of Rekognition. The move followed IBM's decision to discontinue its general-purpose face recognition system. The next day, Microsoft announced that it would stop selling its system to police departments until federal law regulates the technology. While Amazon made the smallest concession of the three companies, it is also the largest provider of the technology to law enforcement. The decision is the culmination of two years of research and external pressure to demonstrate Rekognition's technical flaws and its potential for abuse.
This week IBM, Amazon, and Microsoft all said they would halt sales of facial recognition to US police and called on Congress to impose rules on use of the technology. A police reform bill introduced in the House of Representatives Monday by prominent Democrats in response to weeks of protest over racist policing practices would do just that. But some privacy advocates say its restrictions aren't tight enough and could legitimize the way police use facial recognition today. "We're concerned," says Neema Guliani, senior legislative counsel for the ACLU in Washington, DC, citing evidence that many facial recognition algorithms are less accurate on darker skin tones. She urges a federal ban on facial recognition "unless and until it can be used in a way that respects civil liberties"; Guliani says it's not clear that is possible.
To some in the tech industry, facial recognition increasingly looks like toxic technology. IBM is the latest company to declare facial recognition too troubling. CEO Arvind Krishna told members of Congress Monday that IBM would no longer offer the technology, citing the potential for racial profiling and human rights abuse. In a letter, Krishna also called for police reforms aimed at increasing scrutiny and accountability for misconduct. "We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies," wrote Krishna, the first non-white CEO in the company's 109-year history.