Abolish the #TechToPrisonPipeline

#artificialintelligence

The authors of the Harrisburg University study make explicit their desire to provide "a significant advantage for law enforcement agencies and other intelligence agencies to prevent crime," as a co-author and former NYPD police officer outlined in the original press release.[38] At a time when the legitimacy of the carceral state, and policing in particular, is being challenged on fundamental grounds in the United States, there is high demand in law enforcement for research of this nature: research that erases historical violence and manufactures fear through the so-called prediction of criminality. Publishers and funding agencies serve a crucial role in feeding this ravenous maw by providing platforms and incentives for such research. The circulation of this work by a major publisher like Springer would represent a significant step towards the legitimation and application of repeatedly debunked, socially harmful research in the real world. To reiterate our demands, the review committee must publicly rescind the offer of publication for this specific study, along with an explanation of the criteria used to evaluate it. Springer must issue a statement condemning the use of criminal justice statistics to predict criminality and acknowledging their role in incentivizing such harmful scholarship in the past. Finally, all publishers must refrain from publishing similar studies in the future.


It's too late to ban face recognition – here's what we need instead

New Scientist

Calls for an outright ban on face recognition technology are growing louder, but it is already too late. Given its widespread use by tech companies and the police, permanently rolling back the technology is impossible. It was widely reported this week that the European Commission is considering a temporary ban on the use of face recognition in public spaces. The proposed hiatus of up to five years, according to a white paper obtained by news site Politico, would aim to give politicians in Europe time to develop measures to mitigate the potential risks associated with the technology. Several US cities, including San Francisco, are mulling or have enacted similar bans.


Despite what you may think, face recognition surveillance isn't inevitable

#artificialintelligence

Last year, communities banded together to prove that they can--and will--defend their privacy rights. As part of ACLU-led campaigns, three California cities--San Francisco, Berkeley, and Oakland--as well as three Massachusetts municipalities--Somerville, Northampton, and Brookline--banned the government's use of face recognition in their communities. Following another ACLU effort, the state of California blocked police body cam use of the technology, forcing San Diego's police department to shutter its massive face surveillance flop. And in New York City, tenants successfully fended off their landlord's efforts to install face surveillance. Even the private sector demonstrated it had a responsibility to act in the face of the growing threat of face surveillance.


Lessons from Archives: Strategies for Collecting Sociocultural Data in Machine Learning

arXiv.org Artificial Intelligence

A growing body of work shows that many problems in fairness, accountability, transparency, and ethics in machine learning systems are rooted in decisions surrounding the data collection and annotation process. In spite of its fundamental nature, however, data collection remains an overlooked part of the machine learning (ML) pipeline. In this paper, we argue that a new specialization should be formed within ML that is focused on methodologies for data collection and annotation: efforts that require institutional frameworks and procedures. Specifically for sociocultural data, parallels can be drawn from archives and libraries. Archives are the longest-standing communal effort to gather human information, and archival scholars have already developed the language and procedures to address and discuss many challenges pertaining to data collection, such as consent, power, inclusivity, transparency, and ethics & privacy. We discuss how these five key approaches from document collection practices in archives can inform data collection in sociocultural ML. By showing data collection practices from another field, we encourage ML researchers to be more cognizant and systematic in data collection and to draw from interdisciplinary expertise.
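To make the parallel concrete, here is a minimal Python sketch of what archival-style record keeping could look like at the level of individual data items. The `CollectionRecord` fields and the `audit` check are hypothetical illustrations that loosely map the five themes above (consent, power, inclusivity, transparency, and ethics & privacy) onto per-item metadata; they are not a reconstruction of the paper's actual proposal.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class CollectionRecord:
    """Hypothetical per-item metadata, loosely modeled on archival practice."""
    item_id: str
    source: str                   # provenance / transparency: where the item came from
    collected_on: date
    collector: str                # power: who decided to include this item
    consent_obtained: bool        # consent: did the subject agree to inclusion?
    consent_scope: str            # e.g. "research-only" or "commercial"
    sensitive_attributes: list = field(default_factory=list)  # ethics & privacy

def audit(records):
    """Flag items lacking documented, research-scoped consent."""
    return [r.item_id for r in records
            if not r.consent_obtained or r.consent_scope != "research-only"]

records = [
    CollectionRecord("img-001", "partner archive", date(2019, 4, 2),
                     "curation board", True, "research-only", ["face"]),
    CollectionRecord("img-002", "web scrape", date(2019, 5, 9),
                     "crawler", False, "unknown", ["face"]),
]
print(audit(records))  # -> ['img-002']
```

The point of the sketch is that consent and provenance become queryable properties of the dataset itself, rather than facts lost at collection time.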


Why facial recognition's racial bias problem is so hard to crack

#artificialintelligence

Jimmy Gomez is a California Democrat, a Harvard graduate and one of the few Hispanic lawmakers serving in the US House of Representatives. But to Amazon's facial recognition system, he looks like a potential criminal. Gomez was one of 28 US Congress members falsely matched with mugshots of people who've been arrested, as part of a test of the Amazon Rekognition program that the American Civil Liberties Union ran last year. Nearly 40 percent of the false matches by Amazon's tool, which is being used by police, involved people of color. This is part of a CNET special report exploring the benefits and pitfalls of facial recognition.
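For context, a quick back-of-the-envelope check of those figures (assuming, as was widely reported, that the ACLU ran all 535 sitting members of Congress against the mugshot database):

```python
members_scanned = 535                 # all sitting House and Senate members
false_matches = 28                    # lawmakers wrongly matched to mugshots
print(f"false match rate: {false_matches / members_scanned:.1%}")  # -> 5.2%

# "Nearly 40 percent" of the false matches involved people of color,
# i.e. roughly 11 of the 28:
print(round(0.40 * false_matches))    # -> 11
```

The disparity, rather than the raw error rate, is what drew scrutiny: people of color made up a far smaller share of Congress than of the false matches.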


An A.I. Pioneer Wants an FDA for Facial Recognition

#artificialintelligence

Erik Learned-Miller is one reason we talk about facial recognition at all. In 2007, years before the current A.I. boom made "deep learning" and "neural networks" common phrases in Silicon Valley, Learned-Miller and three colleagues at the University of Massachusetts Amherst released a dataset of faces titled Labeled Faces in the Wild. To you or me, Labeled Faces in the Wild just looks like folders of unremarkable images. You can download them and look for yourself. There's boxer Joe Gatti, gloves raised mid-fight.
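If you want to browse the data yourself, the dataset is also mirrored in common ML toolkits. A minimal sketch using scikit-learn's built-in loader (the `min_faces_per_person` threshold is an arbitrary choice for illustration):

```python
# Downloads Labeled Faces in the Wild on first call and caches it locally.
from sklearn.datasets import fetch_lfw_people

lfw = fetch_lfw_people(min_faces_per_person=20, resize=0.5)
print(lfw.images.shape)       # (n_images, height, width) grayscale arrays
print(lfw.target_names[:5])   # names of the photographed individuals
```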


SF Facial Recognition Ban: What Now For AI (Artificial Intelligence)?

#artificialintelligence

Recently, San Francisco passed, in an 8-to-1 vote, a ban on the use of facial recognition technologies by local agencies. The move is unlikely to be a one-off, either: other local governments are exploring similar prohibitions to address the Orwellian risk that the technology could harm people's privacy. "In the mad dash towards AI and analytics, we often turn a blind eye to their long-range societal implications which can lead to startling conclusions," said Kon Leong, CEO of ZL Technologies. Yet some tech companies are getting proactive.


Making face recognition less biased doesn't make it less scary

MIT Technology Review

In the past few years, there's been a dramatic rise in the adoption of face recognition, detection, and analysis technology. You're probably most familiar with recognition systems, like Facebook's photo-tagging recommender and Apple's FaceID, which can identify specific individuals. Detection systems, on the other hand, determine whether a face is present at all, and analysis systems try to identify aspects like gender and race. All of these systems are now being used for a variety of purposes, from hiring and retail to security and surveillance. Many people believe that such systems are both highly accurate and impartial.
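The three categories map onto distinct API calls in practice. Here is a minimal sketch using the open-source face_recognition Python package (the file names are placeholders) that illustrates detection versus recognition; attribute analysis of the kind described in the article would require a separate classifier and is not part of this library:

```python
import face_recognition

# Detection: locate faces in an image without identifying anyone.
image = face_recognition.load_image_file("crowd.jpg")          # placeholder
boxes = face_recognition.face_locations(image)
print(f"faces detected: {len(boxes)}")

# Recognition: compare detected faces against a known individual.
known = face_recognition.face_encodings(
    face_recognition.load_image_file("known_person.jpg"))[0]   # placeholder
candidates = face_recognition.face_encodings(image)
if candidates:
    match = face_recognition.compare_faces([known], candidates[0])[0]
    print(f"first face matches known person: {match}")
```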


Amazon investors press company to stop selling 'racially biased' surveillance tech to government agencies

FOX News

Why the American Civil Liberties Union is calling out Amazon's facial recognition tool, and what the ACLU found when it compared photos of members of Congress to public arrest photos. A group of Amazon shareholders is pushing the tech giant to stop selling its controversial facial recognition technology to U.S. government agencies, just days after a coalition of 85 human rights, faith, and racial justice groups demanded in an open letter that Jeff Bezos' company stop marketing surveillance technology to the feds. Over the last year, the "Rekognition" technology, which has reportedly been marketed to U.S. Immigration and Customs Enforcement (ICE), has come under fire from immigrants' rights groups and privacy advocates who argue that it can be misused and ultimately lead to racially biased outcomes. A test of the technology by the American Civil Liberties Union (ACLU) showed that 28 members of Congress, disproportionately people of color, were incorrectly identified as police suspects. According to media reports and the ACLU, Amazon has already sold or marketed "Rekognition" to law enforcement agencies in three states.


Amazon face recognition wrongly tagged lawmakers as police suspects, fueling racial bias concerns

FOX News

Amazon's Rekognition facial surveillance technology has wrongly tagged 28 members of Congress as police suspects, according to ACLU research, which notes that nearly 40 percent of the lawmakers identified by the system are people of color. In a blog post, Jacob Snow, technology and civil liberties attorney for the ACLU of Northern California, said that the false matches were made against a mugshot database. The matches were also disproportionately people of color, he said. These include six members of the Congressional Black Caucus, among them civil rights legend Rep. John Lewis, D-Ga.