The Los Angeles Police Commission on Tuesday said it would review the city Police Department's use of facial recognition software and how it compares with programs in other major cities. The review follows reporting by The Times this week that publicly revealed the scope of the LAPD's use of facial recognition for the first time, including that hundreds of LAPD officers have used it nearly 30,000 times since 2009. Critics say police denials of its use are part of a long pattern of deception and that transparency is essential, given potential privacy and civil rights infringements. Commission President Eileen Decker said a subcommittee of the commission would "do a deeper dive" into the technology's use and "work with the department in terms of analyzing the oversight mechanisms" for the system. "It's a good time to take a global look at this issue," Decker said.
The Los Angeles Police Department has used facial-recognition software nearly 30,000 times since 2009, with hundreds of officers running images of suspects from surveillance cameras and other sources against a massive database of mugshots taken by law enforcement. The new figures, released to The Times, reveal for the first time how commonly facial recognition is used in the department, which for years has provided vague and contradictory information about how and whether it uses the technology. The LAPD has consistently denied having records related to facial recognition, and at times denied using the technology at all. The truth is that, while it does not have its own facial-recognition platform, LAPD personnel have access to facial-recognition software through a regional database maintained by the Los Angeles County Sheriff's Department. And between Nov. 6, 2009, and Sept. 11 of this year, LAPD officers used the system's software 29,817 times.
Boston will become the second-largest city in the US to ban facial recognition software for government use after a unanimous city council vote. Following San Francisco, which banned facial recognition in 2019, Boston will bar city officials from using facial recognition systems. The ordinance will also bar them from working with any third-party companies or organizations to acquire information gathered through facial recognition software. The ordinance was co-sponsored by Councilors Ricardo Arroyo and Michelle Wu, who were especially concerned about the potential for racial bias in the technology, according to a report from WBUR. "Boston should not be using racially discriminatory technology and technology that threatens our basic rights," Wu said at a hearing before the vote.
The authors of the Harrisburg University study make explicit their desire to provide "a significant advantage for law enforcement agencies and other intelligence agencies to prevent crime" as a co-author and former NYPD police officer outlined in the original press release. At a time when the legitimacy of the carceral state, and policing in particular, is being challenged on fundamental grounds in the United States, there is high demand in law enforcement for research of this nature, research which erases historical violence and manufactures fear through the so-called prediction of criminality. Publishers and funding agencies serve a crucial role in feeding this ravenous maw by providing platforms and incentives for such research. The circulation of this work by a major publisher like Springer would represent a significant step towards the legitimation and application of repeatedly debunked, socially harmful research in the real world. To reiterate our demands, the review committee must publicly rescind the offer for publication of this specific study, along with an explanation of the criteria used to evaluate it. Springer must issue a statement condemning the use of criminal justice statistics to predict criminality and acknowledging their role in incentivizing such harmful scholarship in the past. Finally, all publishers must refrain from publishing similar studies in the future.
Nearly a decade ago, Santa Cruz was among the first cities in the U.S. to adopt predictive policing. This week, the California city became the first in the country to ban the practice. In a unanimous decision Tuesday, the City Council passed an ordinance that bans the use of data to predict where crimes may occur and also bars the city from using facial recognition software. In recent years, both predictive policing and facial recognition technology have been criticized as racially prejudiced, often contributing to increased patrols in Black or brown neighborhoods or false accusations against people of color. Predictive policing uses algorithms that encourage officers to patrol locations identified as high-crime based on victim reports.
Concern at the use of facial recognition technology continues as California lawmakers ban its use in the body cameras worn by state and local law enforcement officers. It comes after the American Civil Liberties Union (ACLU), a US civil rights group, ran a picture of every California state legislator through a facial-recognition program that matches facial pictures against a database of 25,000 criminal mugshots. The test saw the facial recognition program falsely flag 26 legislators as criminals. And to make matters worse, more than half of the falsely matched lawmakers were people of colour, according to the ACLU. Officials in San Francisco have already banned the use of facial recognition technology, meaning that local agencies, such as the local police force and other city agencies such as transportation, will not be able to utilise the technology in any of their systems.
California could soon become the largest state to ban the use of facial recognition technology in law enforcement body cameras, a significant milestone in the regulation of the burgeoning technology. The State Assembly on Thursday passed AB 1215, a bill that would impose a three-year moratorium on the technology, garnering praise from privacy and civil liberties advocates. The legislation now heads to Gov. Gavin Newsom's desk.
On Tuesday, in an 8-1 tally, the San Francisco Board of Supervisors voted to ban the use of facial recognition software by city departments, including police. Supporters of the ban cited racial inequality in audits of facial recognition software from companies like Amazon and Microsoft, as well as dystopian surveillance happening now in China. At the core of arguments around the regulation of facial recognition software is the question of whether a temporary moratorium should be put in place until police and governments adopt policies and standards, or whether the technology should be banned permanently. Some believe facial recognition software can be used to exonerate the innocent and that more time is needed to gather information. Others, like San Francisco Supervisor Aaron Peskin, believe that even if AI systems achieve racial parity, facial recognition is a "uniquely dangerous and oppressive technology."
Police departments across the nation are generating leads and making arrests by feeding celebrity photos, CGI renderings, and manipulated images into facial recognition software. Often unbeknownst to the public, law enforcement is identifying suspects based on "all manner of 'probe photos,' photos of unknown individuals submitted for search against a police or driver license database," a study published on Thursday by the Georgetown Law Center on Privacy and Technology reported. The new research comes on the heels of a landmark privacy vote on Tuesday in San Francisco, which is now the first US city to ban the use of facial recognition technology by police and government agencies. A recent groundswell of opposition has led to the passage of legislation that aims to protect marginalized communities from spy technology. These systems "threaten to fundamentally change the nature of our public spaces," said Clare Garvie, author of the study and senior associate at the Georgetown Law Center on Privacy and Technology.
In this Oct. 31 photo, a man has his face painted to represent efforts to defeat facial recognition, during a protest at Amazon headquarters over the company's facial recognition system. San Francisco has become the first U.S. city to ban the use of facial recognition technology by police and city agencies.