Following approval votes in the state Assembly and Senate, New York will suspend the use of AI facial recognition technology in schools for two years, effective Wednesday. Governor Andrew Cuomo has signed the legislation into law. The decision follows a lawsuit filed in June by the New York Civil Liberties Union on behalf of parents whose school district adopted the technology earlier this year. Facial recognition remains among the most controversial AI deployments in the United States: cities including San Francisco, Somerville, and Oakland banned the technology in 2019. Moreover, a letter sent to the US Privacy and Civil Liberties Oversight Board (PCLOB) in January asked the US government to halt such applications pending further review.
The New York legislature today passed a moratorium banning the use of facial recognition and other forms of biometric identification in schools until 2022. The bill, which has yet to be signed by Governor Andrew Cuomo, comes in response to the planned launch of facial recognition by the Lockport City School District and appears to be the first in the nation to explicitly regulate the use of the technology in schools. In January, Lockport Schools became one of the only U.S. school districts to adopt facial recognition in all of its K-12 buildings, which serve about 5,000 students. Proponents argued the $1.4 million system could keep students safe by enforcing watchlists and sending alerts when it detected someone dangerous (or otherwise unwanted). But critics said it could be used to surveil students and build a database of sensitive information about people's faces, which the school district then might struggle to keep secure.
The ban comes after civil liberties groups highlighted what they described as faults in facial recognition algorithms, citing NIST findings that most facial recognition software was more likely to misidentify people of colour than white people. The Boston ban follows one imposed by San Francisco on the use of face recognition technology last year. It prevents any city employee from using facial recognition or asking a third party to use the technology on the city's behalf. Boston's police department said it had not used the technology over what it called reliability fears, even though the best systems are reasonably accurate in average working conditions. Critics also opposed the technology on the grounds that it might chill citizens' right to protest.
Boston will become the second largest city in the US to ban facial recognition software for government use after a unanimous city council vote. Following San Francisco, which banned facial recognition in 2019, Boston will bar city officials from using facial recognition systems. The ordinance will also bar them from working with any third party companies or organizations to acquire information gathered through facial recognition software. The ordinance was co-sponsored by Councilors Ricardo Arroyo and Michelle Wu, who were especially concerned about the potential for racial bias in the technology, according to a report from WBUR. 'Boston should not be using racially discriminatory technology and technology that threatens our basic rights,' Wu said at a hearing before the vote.
The authors of the Harrisburg University study make explicit their desire to provide "a significant advantage for law enforcement agencies and other intelligence agencies to prevent crime" as a co-author and former NYPD police officer outlined in the original press release. At a time when the legitimacy of the carceral state, and policing in particular, is being challenged on fundamental grounds in the United States, there is high demand in law enforcement for research of this nature, research which erases historical violence and manufactures fear through the so-called prediction of criminality. Publishers and funding agencies serve a crucial role in feeding this ravenous maw by providing platforms and incentives for such research. The circulation of this work by a major publisher like Springer would represent a significant step towards the legitimation and application of repeatedly debunked, socially harmful research in the real world. To reiterate our demands, the review committee must publicly rescind the offer for publication of this specific study, along with an explanation of the criteria used to evaluate it. Springer must issue a statement condemning the use of criminal justice statistics to predict criminality and acknowledging their role in incentivizing such harmful scholarship in the past. Finally, all publishers must refrain from publishing similar studies in the future.
Nearly a decade ago, Santa Cruz was among the first cities in the U.S. to adopt predictive policing. This week, the California city became the first in the country to ban the practice. In a unanimous decision Tuesday, the City Council passed an ordinance that bans the use of data to predict where crimes may occur and also bars the city from using facial recognition software. In recent years, both predictive policing and facial recognition technology have been criticized as racially prejudiced, often contributing to increased patrols in Black or brown neighborhoods or false accusations against people of color. Predictive policing uses algorithms that direct officers to patrol locations identified as high-crime based on victim reports.
"When even the makers of face recognition refuse to sell this surveillance technology because it is so dangerous, lawmakers can no longer deny the threats to our rights and liberties," Matt Cagle, a technology and civil liberties lawyer with the ACLU of Northern California, said in a statement. "Congress and legislatures nationwide must swiftly stop law enforcement use of face recognition, and companies like Microsoft should work with the civil rights community -- not against it -- to make that happen."
That's why the announcements by IBM, Amazon and Microsoft were a success for activists -- a rare retreat by some of Silicon Valley's biggest names over a key new technology. It came from years of work by researchers including Joy Buolamwini to make the case that facial recognition software is biased. A test commissioned by the ACLU of Northern California found that Amazon's Rekognition software misidentified 28 lawmakers as people who had been arrested for a crime. That happens in part because the systems are trained on data sets that are themselves skewed.
As protests against police brutality and systemic racism continue around the globe, there's been a growing discussion around the importance of blurring images from these protests before posting them online. After all, as John Oliver points out in the Last Week Tonight video above, "there are currently serious concerns that facial recognition is being used to identify Black Lives Matter protesters." Oliver follows that up with a 20-minute deep dive into the dangers of the technology, including ethical concerns, the companies harvesting our photos to sell to law enforcement agencies, and the fact that facial recognition can be biased and inaccurate (studies have even found that it's more likely to misidentify people of colour than white people, for instance). "Clearly, what we really need to do is put limits on how this technology can be used, and some locations have laws in place already," says Oliver. "San Francisco banned facial recognition last year.
Amazon announced on Wednesday it was implementing a "one-year moratorium" on police use of Rekognition, its facial-recognition technology. Lawmakers and civil liberties groups have expressed growing alarm over the tool's potential for misuse by law enforcement for years, particularly against communities of color. Now, weeks into worldwide protests against police brutality and racism sparked by the killing of George Floyd, Amazon appears to have acknowledged these concerns. In a short blog post about the decision, the tech giant said it hopes the pause "might give Congress enough time to implement appropriate rules" for the use of facial-recognition technology, which is largely unregulated in the US. Critics have said that the tech could easily be abused by the government, and they cite studies showing tools like Rekognition misidentify people of color at higher rates than white people.