Abolish the #TechToPrisonPipeline

#artificialintelligence

The authors of the Harrisburg University study make explicit their desire to provide "a significant advantage for law enforcement agencies and other intelligence agencies to prevent crime" as a co-author and former NYPD police officer outlined in the original press release.[38] At a time when the legitimacy of the carceral state, and policing in particular, is being challenged on fundamental grounds in the United States, there is high demand in law enforcement for research of this nature, research which erases historical violence and manufactures fear through the so-called prediction of criminality. Publishers and funding agencies serve a crucial role in feeding this ravenous maw by providing platforms and incentives for such research. The circulation of this work by a major publisher like Springer would represent a significant step towards the legitimation and application of repeatedly debunked, socially harmful research in the real world. To reiterate our demands, the review committee must publicly rescind the offer for publication of this specific study, along with an explanation of the criteria used to evaluate it. Springer must issue a statement condemning the use of criminal justice statistics to predict criminality and acknowledging their role in incentivizing such harmful scholarship in the past. Finally, all publishers must refrain from publishing similar studies in the future.


Amazon facial recognition falsely matches more than 100 politicians to arrested criminals

The Independent - Tech

Amazon's controversial facial recognition technology has incorrectly matched more than 100 photos of politicians in the UK and US to police mugshots, new tests have revealed. Amazon Rekognition uses artificial intelligence software to identify individuals from their facial structure. Customers include law enforcement and US government agencies like Immigration and Customs Enforcement (ICE). It is not the first time the software's accuracy has been called into question. In July 2018, the American Civil Liberties Union (ACLU) found 28 false matches between US Congress members and pictures of people arrested for a crime.


Why algorithms can be racist and sexist

#artificialintelligence

Humans are error-prone and biased, but that doesn't mean that algorithms are necessarily better. Still, the tech is already making important decisions about your life: which political advertisements you see, how your application to your dream job is screened, how police officers are deployed in your neighborhood, and even your home's predicted risk of fire. But these systems can be biased based on who builds them, how they're developed, and how they're ultimately used. This is commonly known as algorithmic bias. It's tough to figure out exactly how systems might be susceptible to algorithmic bias, especially since this technology often operates in a corporate black box.


Politics of Adversarial Machine Learning

arXiv.org Machine Learning

In addition to their security properties, adversarial machine-learning attacks and defenses have political dimensions. They enable or foreclose certain options for both the subjects of the machine learning systems and for those who deploy them, creating risks for civil liberties and human rights. In this paper, we draw on insights from science and technology studies, anthropology, and human rights literature to show how defenses against adversarial attacks can be used to suppress dissent and limit attempts to investigate machine learning systems. To make this concrete, we use real-world examples of how attacks such as perturbation, model inversion, or membership inference can be used for socially desirable ends. Although the predictions of this analysis may seem dire, there is hope. Efforts to address human rights concerns in the commercial spyware industry provide guidance for similar measures to ensure ML systems serve democratic, not authoritarian, ends.
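For readers unfamiliar with the attack classes named in the abstract, the sketch below shows a perturbation attack in its simplest form (the fast gradient sign method). The PyTorch model, labels, and epsilon value are illustrative assumptions, not taken from the paper; the same mechanism that fools a classifier is what makes such attacks usable for the socially desirable ends the authors describe, such as probing or evading a deployed system.

```python
# Minimal sketch of an adversarial perturbation attack (fast gradient sign
# method). Assumes a differentiable PyTorch image classifier; the epsilon
# budget of 0.03 is an illustrative choice, not a value from the paper.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Nudge each input pixel in the direction that most increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()  # small per-pixel change, large effect
    return x_adv.clamp(0, 1).detach()    # keep the result a valid image
```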


Despite what you may think, face recognition surveillance isn't inevitable

#artificialintelligence

Last year, communities banded together to prove that they can--and will--defend their privacy rights. As part of ACLU-led campaigns, three California cities--San Francisco, Berkeley, and Oakland--as well as three Massachusetts municipalities--Somerville, Northampton, and Brookline--banned government use of face recognition in their communities. Following another ACLU effort, the state of California blocked police body cam use of the technology, forcing San Diego's police department to shutter its massive face surveillance flop. And in New York City, tenants successfully fended off their landlord's efforts to install face surveillance. Even the private sector demonstrated it had a responsibility to act in the face of the growing threat of face surveillance.


Microsoft and Amazon are at the center of an ACLU lawsuit on facial recognition

#artificialintelligence

The American Civil Liberties Union (ACLU) is pressing forward with a lawsuit involving the facial recognition software offered by Amazon and Microsoft to government clients. In a complaint filed in a Massachusetts federal court, the ACLU asked for a variety of records from the government, including inquiries to companies, meetings about the piloting or testing of facial recognition, voice recognition, and gait recognition technology, requests for proposals, and licensing agreements. At the heart of the lawsuit are Amazon's Rekognition and Microsoft's Face API, both facial recognition products available to customers of the companies' cloud platforms. The ACLU has also asked for more details on the US government's use of voice recognition and gait recognition, the automated process of identifying a person from the way they walk. Police in Shanghai and Beijing are already using gait-analysis tools to identify people.


Artificial Intelligence Can Be Biased. Here's What You Should Know.

#artificialintelligence

Artificial intelligence has already started to shape our lives in ubiquitous and occasionally invisible ways. In its new documentary, In The Age of AI, FRONTLINE examines the promise and peril of this technology. AI systems are being deployed by hiring managers, courts, law enforcement, and hospitals -- sometimes without the knowledge of the people being screened. And while these systems were initially lauded for being more objective than humans, it's fast becoming clear that the algorithms harbor bias, too. It's an issue Joy Buolamwini, a graduate researcher at the Massachusetts Institute of Technology, knows about firsthand. She founded the Algorithmic Justice League to draw attention to the issue, and earlier this year she testified at a congressional hearing on the impact of facial recognition technology on civil rights. "One of the major issues with algorithmic bias is you may not know it's happening," Buolamwini told FRONTLINE.


Why facial recognition's racial bias problem is so hard to crack

#artificialintelligence

Jimmy Gomez is a California Democrat, a Harvard graduate and one of the few Hispanic lawmakers serving in the US House of Representatives. But to Amazon's facial recognition system, he looks like a potential criminal. Gomez was one of 28 US Congress members falsely matched with mugshots of people who've been arrested, as part of a test the American Civil Liberties Union ran last year of the Amazon Rekognition program. Nearly 40 percent of the false matches by Amazon's tool, which is being used by police, involved people of color. This is part of a CNET special report exploring the benefits and pitfalls of facial recognition.


Mitigating Bias in Algorithmic Employment Screening: Evaluating Claims and Practices

arXiv.org Artificial Intelligence

There has been rapidly growing interest in the use of algorithms for employment assessment, especially as a means to address or mitigate bias in hiring. Yet, to date, little is known about how these methods are being used in practice. How are algorithmic assessments built, validated, and examined for bias? In this work, we document and assess the claims and practices of companies offering algorithms for employment assessment, using a methodology that can be applied to evaluate similar applications and issues of bias in other domains. In particular, we identify vendors of algorithmic pre-employment assessments (i.e., algorithms to screen candidates), document what they have disclosed about their development and validation procedures, and evaluate their techniques for detecting and mitigating bias. We find that companies' formulation of "bias" varies, as do their approaches to dealing with it. We also discuss the various choices vendors make regarding data collection and prediction targets, in light of the risks and trade-offs that these choices pose. We consider the implications of these choices and raise a number of technical and legal concerns.
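As an illustration of what "detecting bias" can mean in this setting, the sketch below computes one widely used screening metric, the disparate-impact (four-fifths) ratio of selection rates across groups. The column names, toy data, and 0.8 threshold are illustrative assumptions and do not reflect any specific vendor's procedure.

```python
# Minimal sketch of a disparate-impact check on a pre-employment screen.
# "group" and "selected" are hypothetical column names for illustration.
import pandas as pd

def disparate_impact_ratio(df, group_col="group", selected_col="selected"):
    """Ratio of the lowest group selection rate to the highest (1.0 = parity)."""
    rates = df.groupby(group_col)[selected_col].mean()
    return rates.min() / rates.max()

# Toy example: the screen advances 3/10 of group A and 5/10 of group B,
# giving a ratio of 0.6 -- below the conventional four-fifths (0.8) threshold.
candidates = pd.DataFrame({
    "group":    ["A"] * 10 + ["B"] * 10,
    "selected": [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
              + [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
})
print(disparate_impact_ratio(candidates))  # 0.6
```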


Making face recognition less biased doesn't make it less scary

MIT Technology Review

In the past few years, there's been a dramatic rise in the adoption of face recognition, detection, and analysis technology. You're probably most familiar with recognition systems, like Facebook's photo-tagging recommender and Apple's FaceID, which can identify specific individuals. Detection systems, on the other hand, determine whether a face is present at all; and analysis systems try to identify aspects like gender and race. All of these systems are now being used for a variety of purposes, from hiring and retail to security and surveillance. Many people believe that such systems are both highly accurate and impartial.
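To make the distinction concrete, the sketch below covers only the narrowest of the three categories, detection: locating faces without identifying anyone or inferring attributes. It uses the Haar-cascade detector bundled with OpenCV's Python package; the image path is a placeholder assumption.

```python
# Minimal sketch of face detection (not recognition or analysis) using
# OpenCV's bundled Haar-cascade model. "group_photo.jpg" is a placeholder.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
image = cv2.imread("group_photo.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Each detection is an (x, y, width, height) bounding box; no identity is inferred.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"Detected {len(faces)} face(s)")
```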