

How is bias built into algorithms? Garbage in, garbage out.


In facial recognition and AI development, computers are trained on massive datasets: millions of pictures gathered from all over the web. Only a few such datasets are publicly available, and many organizations rely on them. Abeba Birhane, at University College Dublin, and a co-author recently published a paper examining these academic datasets. Most of the pictures are gathered without consent, people can be identified in them, and some datasets contain racist and pornographic images and text. And then there is the very idea of labeling someone a lawyer, a woman or a criminal based on appearance.

Abolish the #TechToPrisonPipeline


The authors of the Harrisburg University study make explicit their desire to provide "a significant advantage for law enforcement agencies and other intelligence agencies to prevent crime" as a co-author and former NYPD police officer outlined in the original press release.[38] At a time when the legitimacy of the carceral state, and policing in particular, is being challenged on fundamental grounds in the United States, there is high demand in law enforcement for research of this nature, research which erases historical violence and manufactures fear through the so-called prediction of criminality. Publishers and funding agencies serve a crucial role in feeding this ravenous maw by providing platforms and incentives for such research. The circulation of this work by a major publisher like Springer would represent a significant step towards the legitimation and application of repeatedly debunked, socially harmful research in the real world. To reiterate our demands, the review committee must publicly rescind the offer for publication of this specific study, along with an explanation of the criteria used to evaluate it. Springer must issue a statement condemning the use of criminal justice statistics to predict criminality and acknowledging their role in incentivizing such harmful scholarship in the past. Finally, all publishers must refrain from publishing similar studies in the future.

U.S. Activists Fault Face Recognition in Wrongful Arrest for First Time

U.S. News

Robert Williams spent over a day in custody in January after face recognition software matched his driver's license photo to surveillance video of someone shoplifting, the American Civil Liberties Union (ACLU) of Michigan said in the complaint. In a video shared by the ACLU, Williams says officers released him after acknowledging that "the computer" must have been wrong.

AI researchers say scientific publishers help perpetuate racist algorithms

MIT Technology Review

The news: An open letter from a growing coalition of AI researchers is calling out scientific publisher Springer Nature for a conference paper it reportedly planned to include in its forthcoming book Transactions on Computational Science & Computational Intelligence. The paper, titled "A Deep Neural Network Model to Predict Criminality Using Image Processing," presents a face recognition system purportedly capable of predicting whether someone is a criminal, according to the original press release. It was developed by researchers at Harrisburg University and was due to be presented at a forthcoming conference. The demands: Citing the work of leading Black AI scholars, the letter debunks the scientific basis of the paper and asserts that crime-prediction technologies are racist. It also lists three demands: 1) for Springer Nature to rescind its offer to publish the study; 2) for it to issue a statement condemning the use of statistical techniques such as machine learning to predict criminality and acknowledging its role in incentivizing such research; and 3) for all scientific publishers to commit to not publishing similar papers in the future.

IBM gives up on face-recognition business – will other firms follow?

New Scientist

Software giant IBM has announced that it is withdrawing certain face-recognition products from the market, and has called for a "national dialogue" about the technology's use by US law enforcement agencies. The move, which comes as global protests against racism and police brutality enter their third week, is a marked change in the company's approach to face recognition. In November 2019, IBM said "blanket bans on technology are not the answer to concerns around specific use cases". Flaws in face-recognition technology are well documented. A 2018 study by researchers at the Massachusetts Institute of Technology and Microsoft found that dark-skinned women are misidentified by such systems 35 per cent of the time.

Amazon facial recognition falsely matches more than 100 politicians to arrested criminals

The Independent - Tech

Amazon's controversial facial recognition technology has incorrectly matched more than 100 photos of politicians in the UK and US to police mugshots, new tests have revealed. Amazon Rekognition uses artificial intelligence software to identify individuals from their facial structure. Customers include law enforcement and US government agencies like Immigration and Customs Enforcement (ICE). It is not the first time the software's accuracy has been called into question. In July 2018, the American Civil Liberties Union (ACLU) found 28 false matches between US Congress members and pictures of people arrested for a crime.

Why algorithms can be racist and sexist


Humans are error-prone and biased, but that doesn't mean algorithms are necessarily better. Still, the technology is already making important decisions about your life: which political advertisements you see, how your application to your dream job is screened, how police officers are deployed in your neighborhood, even your home's predicted risk of fire. These systems can be biased by who builds them, how they're developed, and how they're ultimately used. This is commonly known as algorithmic bias. It's tough to figure out exactly how systems might be susceptible to it, especially since the technology often operates inside a corporate black box.
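One concrete way "garbage in, garbage out" plays out is representation bias: when one group dominates the training data, a model that minimizes overall error can concentrate its mistakes on the under-represented group. A minimal sketch of that effect, using a toy one-dimensional Gaussian classifier (a hypothetical illustration, not how any real face-recognition system works):

```python
import math

def norm_cdf(x, mu=0.0, sigma=1.0):
    """Cumulative distribution function of N(mu, sigma^2)."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# Two groups whose feature distributions are equally easy to separate:
#   group A: feature ~ N(0, 1), group B: feature ~ N(2, 1)
mu_a, mu_b, sigma = 0.0, 2.0, 1.0

def threshold(p_a):
    """Error-minimising decision threshold for a plug-in Gaussian
    classifier whose training data contains a fraction p_a of group A."""
    p_b = 1.0 - p_a
    return (mu_a + mu_b) / 2 + sigma**2 / (mu_b - mu_a) * math.log(p_a / p_b)

def error_rates(t):
    err_a = 1.0 - norm_cdf(t, mu_a, sigma)  # group A misclassified as B
    err_b = norm_cdf(t, mu_b, sigma)        # group B misclassified as A
    return err_a, err_b

# Balanced training data: symmetric threshold, equal error for both groups.
print(error_rates(threshold(0.5)))   # roughly (0.159, 0.159)

# Skewed training data (98% group A): the threshold shifts toward B,
# and the minority group absorbs almost all of the error.
print(error_rates(threshold(0.98)))  # group B error climbs above 0.8
```

Both groups are equally easy to classify in isolation; only the skew in the training data moves the decision threshold and produces the disparity, which is one mechanism behind the unequal error rates reported in the studies above.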

Politics of Adversarial Machine Learning

In addition to their security properties, adversarial machine-learning attacks and defenses have political dimensions. They enable or foreclose certain options for both the subjects of the machine learning systems and for those who deploy them, creating risks for civil liberties and human rights. In this paper, we draw on insights from science and technology studies, anthropology, and human rights literature to inform how defenses against adversarial attacks can be used to suppress dissent and limit attempts to investigate machine learning systems. To make this concrete, we use real-world examples of how attacks such as perturbation, model inversion, or membership inference can be used for socially desirable ends. Although the predictions of this analysis may seem dire, there is hope. Efforts to address human rights concerns in the commercial spyware industry provide guidance for similar measures to ensure ML systems serve democratic, not authoritarian ends.
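The perturbation attacks the abstract mentions exploit a model's gradient: nudging each input feature a small, bounded step against the gradient can flip the prediction. A sketch on a hand-built linear scorer (an FGSM-style step under assumed toy weights; real attacks target deep networks, where the gradient is computed by backpropagation):

```python
import numpy as np

# Hypothetical linear "classifier": score = w.x + b, predict class 1 if positive.
w = np.array([1.0, -2.0, 3.0, -0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

x = np.array([0.5, 0.2, 0.1, 0.4])   # original input, classified as 1

# FGSM-style perturbation: for a linear score the gradient w.r.t. the
# input is just w, so stepping each coordinate by -epsilon * sign(w)
# lowers the score as much as an L-infinity ball of radius epsilon allows.
epsilon = 0.3
x_adv = x - epsilon * np.sign(w)

print(predict(x), predict(x_adv))    # prints: 1 0 (the small change flips the label)
```

No coordinate moves by more than epsilon, yet the label flips, which is why such perturbations can both defeat a deployed model and, as the paper argues, let subjects of surveillance evade it.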

Despite what you may think, face recognition surveillance isn't inevitable


Last year, communities banded together to prove that they can--and will--defend their privacy rights. As part of ACLU-led campaigns, three California cities--San Francisco, Berkeley, and Oakland--as well as three Massachusetts municipalities--Somerville, Northampton, and Brookline--banned government use of face recognition in their communities. Following another ACLU effort, the state of California blocked police body cam use of the technology, forcing San Diego's police department to shutter its massive face surveillance flop. And in New York City, tenants successfully fended off their landlord's efforts to install face surveillance. Even the private sector demonstrated it had a responsibility to act in the face of the growing threat of face surveillance.

Hard Choices in Artificial Intelligence: Addressing Normative Uncertainty through Sociotechnical Commitments

As AI systems become prevalent in high stakes domains such as surveillance and healthcare, researchers now examine how to design and implement them in a safe manner. However, the potential harms these systems cause to stakeholders in complex social contexts, and how to address them, remain unclear. In this paper, we explain the inherent normative uncertainty in debates about the safety of AI systems. We then address this as a problem of vagueness by examining its place in the design, training, and deployment stages of AI system development. We adopt Ruth Chang's theory of intuitive comparability to illustrate the dilemmas that manifest at each stage. We then discuss how stakeholders can navigate these dilemmas by incorporating distinct forms of dissent into the development pipeline, drawing on Elizabeth Anderson's work on the epistemic powers of democratic institutions. We outline a framework of sociotechnical commitments to formal, substantive and discursive challenges that address normative uncertainty across stakeholders, and propose the cultivation of related virtues by those responsible for development.