Boston becomes the second largest city in the US to ban facial recognition software

Daily Mail - Science & tech

Boston will become the second largest city in the US to ban facial recognition software for government use after a unanimous city council vote. Following San Francisco, which banned facial recognition in 2019, Boston will bar city officials from using facial recognition systems. The ordinance will also bar them from working with any third party companies or organizations to acquire information gathered through facial recognition software. The ordinance was co-sponsored by Councilors Ricardo Arroyo and Michelle Wu, who were especially concerned about the potential for racial bias in the technology, according to a report from WBUR. 'Boston should not be using racially discriminatory technology and technology that threatens our basic rights,' Wu said at a hearing before the vote.


Abolish the #TechToPrisonPipeline

#artificialintelligence

The authors of the Harrisburg University study make explicit their desire to provide "a significant advantage for law enforcement agencies and other intelligence agencies to prevent crime" as a co-author and former NYPD police officer outlined in the original press release.[38] At a time when the legitimacy of the carceral state, and policing in particular, is being challenged on fundamental grounds in the United States, there is high demand in law enforcement for research of this nature, research which erases historical violence and manufactures fear through the so-called prediction of criminality. Publishers and funding agencies serve a crucial role in feeding this ravenous maw by providing platforms and incentives for such research. The circulation of this work by a major publisher like Springer would represent a significant step towards the legitimation and application of repeatedly debunked, socially harmful research in the real world. To reiterate our demands, the review committee must publicly rescind the offer for publication of this specific study, along with an explanation of the criteria used to evaluate it. Springer must issue a statement condemning the use of criminal justice statistics to predict criminality and acknowledging their role in incentivizing such harmful scholarship in the past. Finally, all publishers must refrain from publishing similar studies in the future.


Santa Cruz becomes first U.S. city to ban predictive policing

Los Angeles Times

Nearly a decade ago, Santa Cruz was among the first cities in the U.S. to adopt predictive policing. This week, the California city became the first in the country to ban the practice. In a unanimous decision Tuesday, the City Council passed an ordinance that bans the use of data to predict where crimes may occur and also bars the city from using facial recognition software. In recent years, both predictive policing and facial recognition technology have been criticized as racially prejudiced, often contributing to increased patrols in Black or brown neighborhoods or false accusations against people of color. Predictive policing uses algorithms that encourage officers to patrol locations identified as high-crime based on victim reports.


Microsoft won't sell police its facial-recognition technology, following similar moves by Amazon and IBM

Washington Post - Technology News

"When even the makers of face recognition refuse to sell this surveillance technology because it is so dangerous, lawmakers can no longer deny the threats to our rights and liberties," Matt Cagle, a technology and civil liberties lawyer with the ACLU of Northern California, said in a statement. "Congress and legislatures nationwide must swiftly stop law enforcement use of face recognition, and companies like Microsoft should work with the civil rights community -- not against it -- to make that happen."


Black Lives Matter could change facial recognition forever -- if Big Tech doesn't stand in the way

Washington Post - Technology News

That's why the announcements by IBM, Amazon and Microsoft were a success for activists -- a rare retreat by some of Silicon Valley's biggest names over a key new technology. The shift came from years of work by researchers, including Joy Buolamwini, to make the case that facial recognition software is biased. A test commissioned by the ACLU of Northern California found that Amazon's software, called Rekognition, misidentified 28 lawmakers as people arrested for a crime. That happens in part because the systems are trained on data sets that are themselves skewed.


John Oliver dives into the shady uses of facial recognition

Mashable

As protests against police brutality and systemic racism continue around the globe, there's been a growing discussion around the importance of blurring images from these protests before posting them online. After all, as John Oliver points out in the Last Week Tonight video above, "there are currently serious concerns that facial recognition is being used to identify Black Lives Matter protesters." Oliver follows that up with a 20-minute deep dive into the dangers of the technology, including ethical concerns, the companies harvesting our photos to sell to law enforcement agencies, and the fact that facial recognition can be biased and inaccurate (studies have even found that it's more likely to misidentify people of colour than white people, for instance). "Clearly, what we really need to do is put limits on how this technology can be used, and some locations have laws in place already," says Oliver. "San Francisco banned facial recognition last year."


Amazon Won't Let Police Use Its Facial-Recognition Tech for One Year

#artificialintelligence

Amazon announced on Wednesday it was implementing a "one-year moratorium" on police use of Rekognition, its facial-recognition technology. Lawmakers and civil liberties groups have expressed growing alarm over the tool's potential for misuse by law enforcement for years, particularly against communities of color. Now, weeks into worldwide protests against police brutality and racism sparked by the killing of George Floyd, Amazon appears to have acknowledged these concerns. In a short blog post about the decision, the tech giant said it hopes the pause "might give Congress enough time to implement appropriate rules" for the use of facial-recognition technology, which is largely unregulated in the US. Critics have said that the tech could easily be abused by the government, and they cite studies showing tools like Rekognition misidentify people of color at higher rates than white people.


Amazon bans police use of facial recognition software for one year amid national protests against racial inequality

USATODAY - Tech Top Stories

Amazon announced Wednesday that it is pausing police use of its facial recognition software for one year following nationwide pressure on tech companies to address potential bias. While Amazon did not specify a reason for its decision, racial injustice has been at the forefront of ongoing protests in the wake of the death of George Floyd, who died May 25 after a white Minneapolis police officer pressed his knee into the handcuffed Black man's neck for nearly nine minutes. "We've advocated that governments should put in place stronger regulations to govern the ethical use of facial recognition technology, and in recent days, Congress appears ready to take on this challenge," Amazon said in a statement posted to the company's blog. Researchers have long criticized the technology for producing inaccurate results for people with darker skin, while other studies have shown technological bias against minorities and young people. Nicole Ozer, technology and civil liberties director with the American Civil Liberties Union of Northern California, said in a statement that the organization was "glad the company is finally recognizing the dangers face recognition poses to Black and Brown communities and civil rights more broadly," but that the move was not enough to combat the threat to "our civil rights and civil liberties."


California Activists Ramp Up Fight Against Facial-Recognition Technology

WSJ.com: WSJD - Technology

"This is a bill being sold as a privacy bill, but it's a wolf in sheep's clothing," Matt Cagle, an attorney for the American Civil Liberties Union of Northern California, said in an interview. The ACLU, Electronic Frontier Foundation and other civil liberties groups held a virtual rally Thursday night to rail against the bill, calling it vaguely worded and potentially dangerous for low-income communities hit hard by the coronavirus. Their remarks were the latest shots fired from a campaign to halt the legislation. The bill's fate in California--which has pushed for more aggressive privacy protections in recent years--could foreshadow how a potentially huge market for facial recognition technology is regulated by other states. The bill calls for companies and agencies that use facial recognition tools in areas accessible to the public to "provide a conspicuous and contextually appropriate notice" that faces may get scanned.


Despite what you may think, face recognition surveillance isn't inevitable

#artificialintelligence

Last year, communities banded together to prove that they can--and will--defend their privacy rights. As part of ACLU-led campaigns, three California cities--San Francisco, Berkeley, and Oakland--as well as three Massachusetts municipalities--Somerville, Northampton, and Brookline--banned the government's use of face recognition in their communities. Following another ACLU effort, the state of California blocked police body cam use of the technology, forcing San Diego's police department to shutter its massive face surveillance flop. And in New York City, tenants successfully fended off their landlord's efforts to install face surveillance. Even the private sector demonstrated it had a responsibility to act in the face of the growing threat of face surveillance.