Criminality


A Causal Framework to Evaluate Racial Bias in Law Enforcement Systems

Christia, Fotini, Han, Jessy Xinyi, Miller, Andrew, Shah, Devavrat, Watkins, S. Craig, Winship, Christopher

arXiv.org Machine Learning

We are interested in developing a data-driven method to evaluate race-induced biases in law enforcement systems. While recent works have addressed this question in the context of police-civilian interactions using police stop data, they have two key limitations. First, bias can only be properly quantified if true criminality is accounted for in addition to race, yet it is absent from prior works. Second, law enforcement systems are multi-stage, and hence it is important to isolate the true source of bias within the "causal chain of interactions" rather than simply focusing on the end outcome; this can help guide reforms. In this work, we address these challenges by presenting a multi-stage causal framework that incorporates criminality. We provide a theoretical characterization and an associated data-driven method to evaluate (a) the presence of any form of racial bias, and (b) if so, the primary source of such bias in terms of race and criminality. Our framework identifies three canonical scenarios with distinct characteristics: in settings like (1) airport security, the primary source of observed bias against a race is likely to be bias in law enforcement against innocents of that race; (2) AI-empowered policing, the primary source of observed bias against a race is likely to be bias in law enforcement against criminals of that race; and (3) police-civilian interaction, the primary source of observed bias against a race could be bias in law enforcement against that race or bias from the general public in reporting against the other race. Through an extensive empirical study using police-civilian interaction data and 911 call data, we find an instance of such a counter-intuitive phenomenon: in New Orleans, the observed bias is against the majority race, and the likely reason for it is the over-reporting (via 911 calls) of incidents involving the minority race by the general public.


Examining Gender and Racial Bias in Large Vision-Language Models Using a Novel Dataset of Parallel Images

Fraser, Kathleen C., Kiritchenko, Svetlana

arXiv.org Artificial Intelligence

Following recent advances in large language models (LLMs) and subsequent chat models, a new wave of large vision-language models (LVLMs) has emerged. Such models can incorporate images as input in addition to text and perform tasks such as visual question answering, image captioning, and story generation. Here, we examine potential gender and racial biases in such systems based on the perceived characteristics of the people in the input images. To this end, we present a new dataset, PAIRS (PArallel Images for eveRyday Scenarios). The PAIRS dataset contains sets of AI-generated images of people such that the images are highly similar in terms of background and visual content but differ along the dimensions of gender (man, woman) and race (Black, white). By querying the LVLMs with such images, we observe significant differences in the responses according to the perceived gender or race of the person depicted.


Guarding the Guardians: Automated Analysis of Online Child Sexual Abuse

Puentes, Juanita, Castillo, Angela, Osejo, Wilmar, Calderón, Yuly, Quintero, Viviana, Saldarriaga, Lina, Agudelo, Diana, Arbeláez, Pablo

arXiv.org Artificial Intelligence

Online violence against children has recently increased globally, demanding urgent attention. Competent authorities manually analyze abuse complaints to understand crime dynamics and identify patterns. However, the manual analysis of these complaints presents a challenge because it exposes analysts to harmful content during the review process. Given these challenges, we present a novel solution: an automated tool designed to analyze child sexual abuse reports comprehensively. By automating the analysis process, our tool significantly reduces the risk of exposure to harmful content by categorizing the reports along three dimensions: Subject, Degree of Criminality, and Damage. Furthermore, leveraging our multidisciplinary team's expertise, we introduce a novel approach to annotating the collected data, enabling a more in-depth analysis of the reports. This approach improves the comprehension of fundamental patterns and trends, enabling law enforcement agencies and policymakers to create focused strategies in the fight against violence against children.


The crime of being poor

Curto, Georgina, Kiritchenko, Svetlana, Nejadgholi, Isar, Fraser, Kathleen C.

arXiv.org Artificial Intelligence

The criminalization of poverty has been widely denounced as a collective bias against the most vulnerable. NGOs and international organizations claim that the poor are blamed for their situation, are more often associated with criminal offenses than the wealthy strata of society, and even incur criminal offenses simply as a result of being poor. While no evidence has been found in the literature that correlates poverty and overall criminality rates, this paper offers evidence of a collective belief that associates both concepts. This brief report measures the societal bias that associates criminality with the poor, as compared to the rich, using Natural Language Processing (NLP) techniques on Twitter. The paper quantifies the level of crime-poverty bias across a panel of eight English-speaking countries. The regional differences in the association between crime and poverty cannot be justified by different levels of inequality or unemployment, which the literature correlates with property crimes. The variation in the observed rates of crime-poverty bias across geographic locations could be influenced by cultural factors and the tendency to overestimate the equality of opportunities and social mobility in specific countries. These results have consequences for policy-making and open a new path of research for poverty mitigation, with a focus not only on the poor but on society as a whole. Acting on the collective bias against the poor would facilitate the approval of poverty reduction policies, as well as the restoration of the dignity of the persons affected.


What facial recognition and the racist pseudoscience of phrenology have in common

#artificialintelligence

'Phrenology' has an old-fashioned ring to it. It sounds like it belongs in a history book, filed somewhere between bloodletting and velocipedes. We'd like to think that judging people's worth based on the size and shape of their skull is a practice that's well behind us. However, phrenology is once again rearing its lumpy head. In recent years, machine-learning algorithms have promised governments and private companies the power to glean all sorts of information from people's appearance.



An Algorithm That 'Predicts' Criminality Based on a Face Sparks a Furor

#artificialintelligence

In early May, a press release from Harrisburg University claimed that two professors and a graduate student had developed a facial-recognition program that could predict whether someone would be a criminal. The release said the paper would be published in a collection by Springer Nature, a major academic publisher. The paper, "A Deep Neural Network Model to Predict Criminality Using Image Processing," claimed its algorithm could predict "if someone is a criminal based solely on a picture of their face," with "80 percent accuracy and with no racial bias." The press release has since been deleted from the university website. On Tuesday, more than 1,000 machine-learning researchers, sociologists, historians, and ethicists released a public letter condemning the paper, and Springer Nature confirmed on Twitter that it will not publish the research.


Global Big Data Conference

#artificialintelligence

On Tuesday, a number of AI researchers, ethicists, data scientists, and social scientists released a blog post arguing that academic researchers should stop pursuing research that endeavors to predict the likelihood that an individual will commit a criminal act based on variables like crime statistics and facial scans. The blog post was authored by the Coalition for Critical Technology, which argued that the use of such algorithms perpetuates a cycle of prejudice against minorities. Many studies of the efficacy of face recognition and predictive policing algorithms find that the algorithms tend to judge minorities more harshly, which the authors of the blog post attribute to the inequities in the criminal justice system. The justice system produces biased data, and therefore the algorithms trained on this data propagate those biases, the Coalition for Critical Technology argues. The coalition also argues that the very notion of "criminality" is often based on race, and that research on these technologies therefore assumes a neutrality of the algorithms that does not in fact exist.


AI experts warn against crime prediction algorithms, saying there are no 'physical features to criminality'

The Independent - Tech

A number of AI researchers, data scientists, sociologists, and historians have written an open letter calling for an end to the publishing of research that claims artificial intelligence or facial recognition can predict whether a person is likely to be a criminal. The letter, signed by over 1,000 experts, argues that data generated by the criminal justice system cannot be used to "identify criminals" or predict behaviour. Historical court and arrest data reflect the policies and practices of the criminal justice system and are therefore biased, the experts say. "These data reflect who police choose to arrest, how judges choose to rule, and which people are granted longer or more lenient sentences," the letter reads. Moreover, the letter says, because "'criminality' operates as a proxy for race due to racially discriminatory practices in law enforcement and criminal justice, research of this nature creates dangerous feedback loops".

