Abolish the #TechToPrisonPipeline

#artificialintelligence

The authors of the Harrisburg University study make explicit their desire to provide "a significant advantage for law enforcement agencies and other intelligence agencies to prevent crime," as a co-author and former NYPD police officer put it in the original press release.[38] At a time when the legitimacy of the carceral state, and policing in particular, is being challenged on fundamental grounds in the United States, there is high demand in law enforcement for research of this nature, research that erases historical violence and manufactures fear through the so-called prediction of criminality. Publishers and funding agencies serve a crucial role in feeding this ravenous maw by providing platforms and incentives for such research. The circulation of this work by a major publisher like Springer would represent a significant step towards the legitimation and application of repeatedly debunked, socially harmful research in the real world. To reiterate our demands, the review committee must publicly rescind the offer for publication of this specific study, along with an explanation of the criteria used to evaluate it. Springer must issue a statement condemning the use of criminal justice statistics to predict criminality and acknowledging their role in incentivizing such harmful scholarship in the past. Finally, all publishers must refrain from publishing similar studies in the future.


Why Clearview AI is a threat to us all

#artificialintelligence

Clearview AI was founded in 2017 by Richard Schwartz and now-CEO Hoan Ton-That. The company counts Peter Thiel and AngelList founder Naval Ravikant among its investors. Clearview's technology is actually quite simple: Its facial recognition algorithm compares the image of a person's face from security camera footage to an existing database of potential matches. Marketed primarily to law enforcement agencies, the Clearview app allows users to take and upload a picture of a person, then view all of the public images of that person as well as links to where those photos were published. Basically, if you're caught on camera anywhere in public, local law enforcement can use that image to mine your entire online presence for information about you, effectively ending any semblance of personal privacy.


Facial recognition regulation is surprisingly bipartisan

#artificialintelligence

Bipartisanship in modern politics can seem kind of like an unbelievable, mythical creature. But in recent months, as Congress considered regulation of one of the most controversial topics it faces -- how, when, or if to use facial recognition -- we've gotten glimpses of a political unicorn. In two House Oversight and Reform committee hearings last summer, some of the most prominent Republicans and Democrats in the United States Congress joined together in calls for legislative reform. Proponents of regulation ranged from Rep. Alexandria Ocasio-Cortez (D-NY) to Rep. Jim Jordan (R-OH), a frequent Trump supporter on cable news. On Friday, Jordan was also appointed to the House Intelligence Committee to confront witnesses in public presidential impeachment hearings that begin this week.


San Francisco Could Be First to Ban Facial Recognition Tech

WIRED

If a local tech industry critic has his way, San Francisco could become the first US city to ban its agencies from using facial recognition technology. Aaron Peskin, a member of the city's Board of Supervisors, proposed the ban Tuesday as part of a suite of rules to enhance surveillance oversight. In addition to the ban on facial recognition technology, the ordinance would require city agencies to gain the board's approval before buying new surveillance technology, putting the burden on city agencies to publicly explain why they want the tools as well as the potential harms. It would also require an audit of any existing surveillance tech--things like gunshot-detection systems, surveillance cameras, or automatic license plate readers--in use by the city; officials would have to report annually on how the technology was used, community complaints, and with whom they share the data. Those rules would follow similar ordinances passed in nearby Oakland and Santa Clara County.


The Face ID ruling is a big win for digital rights. Here's what needs to happen next.

Mashable

Now, if the cops try to force you to unlock your iPhone with your face, the law might actually be on your side. Previously, other courts had ruled that the police could make suspects unlock their phones with Touch ID, even though legally they couldn't force those same suspects to give up their passcodes. Digital rights experts hope that a ruling in California, however, is a step toward changing that precedent. Recently, California magistrate Judge Kandis Westmore denied a request for a warrant to compel suspects to unlock their phones using Face ID and Touch ID. In a written opinion (via Apple Insider) from Jan. 10, she said she made her decision in part because forcing someone to give up a passcode -- whether alphanumeric or biometric -- would violate their Fifth Amendment right against self-incrimination.


Amazon investors press company to stop selling 'racially biased' surveillance tech to government agencies

FOX News

Why the American Civil Liberties Union is calling out Amazon's facial recognition tool, and what the ACLU found when it compared photos of members of Congress to public arrest photos. A group of Amazon shareholders is pushing the tech giant to stop selling its controversial facial recognition technology to U.S. government agencies, just days after a coalition of 85 human rights, faith, and racial justice groups demanded in an open letter that Jeff Bezos' company stop marketing surveillance technology to the feds. Over the last year, the "Rekognition" technology, which has reportedly been marketed to U.S. Immigration and Customs Enforcement (ICE), has come under fire from immigrants' rights groups and privacy advocates who argue that it can be misused and ultimately lead to racially biased outcomes. A test of the technology by the American Civil Liberties Union (ACLU) showed that 28 members of Congress, mostly people of color, were incorrectly identified as police suspects. According to media reports and the ACLU, Amazon has already sold or marketed "Rekognition" to law enforcement agencies in three states.


What's Left for Congress to Ask Big Tech Firms? A Lot

WIRED

Executives from Amazon, Apple, AT&T, Charter Communications, Google, and Twitter are heading to Washington Wednesday to testify before the Senate Commerce Committee on the topic of privacy. As ever, the main question will be: Are these companies doing enough to protect consumer privacy, and if not, what should Congress do about it? It has been the backdrop to just about every hearing with tech leaders over the last year--and there have been many. And yet, the threat of regulation carries new weight this time around. Over the summer, California passed the country's first data privacy bill, giving residents unprecedented control over their data.


Amazon face recognition wrongly tagged lawmakers as police suspects, fueling racial bias concerns

FOX News

Amazon's Rekognition facial surveillance technology has wrongly tagged 28 members of Congress as police suspects, according to ACLU research, which notes that nearly 40 percent of the lawmakers identified by the system are people of color. In a blog post, Jacob Snow, technology and civil liberties attorney for the ACLU of Northern California, said that the false matches were made against a mugshot database. The matches were also disproportionately people of color, he said. These include six members of the Congressional Black Caucus, among them civil rights legend Rep. John Lewis, D-Ga.


Privacy Scores Big Wins, as the Data Backlash Grows

WIRED

On Monday, police in Florida abandoned a pilot program that had put Amazon's facial recognition powers at their disposal. On Wednesday, representatives from the country's most powerful technology companies will gather in San Francisco to take a hard look at the industry's approach to privacy. And on Thursday, the California legislature will vote on a bill that would grant internet users more power over their data than ever before in the United States. Any of these alone would mark a good week for privacy. Together, and combined with even more major advancements from earlier this month, they represent a tectonic shift.


Amazon facial recognition software raises privacy concerns with the ACLU

#artificialintelligence

Amazon hasn't exactly kept Rekognition under wraps. In late 2016, the software giant talked up its facial detection software in a relatively benign AWS post announcing that the tech was already being implemented by the Washington County Sheriff's Office in Oregon for suspect identification. The ACLU of Northern California is shining more light on the tech this week, however, after announcing that it had obtained documents detailing the service, which it believes "raises profound civil liberties and civil rights concerns." The documents in question highlight Washington County's database of 300,000 mug shot photos and a mobile app designed specifically for deputies to cross-reference faces. They also note that Amazon has solicited the county to reach out to other potential customers for the service, including a company that makes body cameras.