Recidivism



Incorporating Interpretable Output Constraints in Bayesian Neural Networks

Neural Information Processing Systems

The ability to encode informative functional beliefs in BNN priors can significantly reduce the bias and uncertainty of the posterior predictive, especially in regions of input space sparsely covered by training data [27].



seeBias: A Comprehensive Tool for Assessing and Visualizing AI Fairness

Ning, Yilin, Ma, Yian, Liu, Mingxuan, Li, Xin, Liu, Nan

arXiv.org Artificial Intelligence

Fairness in artificial intelligence (AI) prediction models is increasingly emphasized to support responsible adoption in high-stakes domains such as health care and criminal justice. Guidelines and implementation frameworks highlight the importance of both predictive accuracy and equitable outcomes. However, current fairness toolkits often evaluate classification performance disparities in isolation, with limited attention to other critical aspects such as calibration. To address these gaps, we present seeBias, an R package for comprehensive evaluation of model fairness and predictive performance. seeBias offers an integrated evaluation across classification, calibration, and other performance domains, providing a more complete view of model behavior. It includes customizable visualizations to support transparent reporting and responsible AI implementation. Using public datasets from criminal justice and healthcare, we demonstrate how seeBias supports fairness evaluations, and uncovers disparities that conventional fairness metrics may overlook. The R package is available on GitHub, and a Python version is under development.
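The abstract notes that calibration disparities are often overlooked by fairness toolkits. As an illustration of what a group-wise calibration check involves, here is a minimal sketch in Python; the function name and binning scheme are our own assumptions and do not reflect the seeBias API, which is an R package.

```python
# Sketch of a group-wise calibration curve, one aspect an integrated
# fairness evaluation would cover. Illustrative only; not the seeBias API.
import numpy as np

def calibration_by_group(y_true, y_prob, group, n_bins=10):
    """Per group, return (mean predicted probability, observed positive rate)
    for each probability bin that contains at least one sample."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    results = {}
    for g in np.unique(group):
        mask = group == g
        p, y = y_prob[mask], y_true[mask]
        idx = np.clip(np.digitize(p, bins) - 1, 0, n_bins - 1)
        curve = []
        for b in range(n_bins):
            sel = idx == b
            if sel.any():
                curve.append((float(p[sel].mean()), float(y[sel].mean())))
        results[g] = curve
    return results

# Tiny worked example: group "a" is under-calibrated at high scores.
res = calibration_by_group(np.array([0, 0, 1, 1]),
                           np.array([0.1, 0.1, 0.9, 0.9]),
                           np.array(["a", "a", "b", "b"]))
```

Comparing the per-group curves (rather than a single pooled curve) is what surfaces the kind of disparity the abstract says conventional metrics can miss.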


How Aligned are Generative Models to Humans in High-Stakes Decision-Making?

Tan, Sarah, Mallari, Keri, Adebayo, Julius, Gordo, Albert, Wells, Martin T., Inkpen, Kori

arXiv.org Artificial Intelligence

Large generative models (LMs) are increasingly being considered for high-stakes decision-making. This work considers how such models compare to humans and predictive AI models on a specific case of recidivism prediction. We combine three datasets -- COMPAS predictive AI risk scores, human recidivism judgements, and photos -- into a dataset on which we study the properties of several state-of-the-art, multimodal LMs. Beyond accuracy and bias, we focus on studying human-LM alignment on the task of recidivism prediction. We investigate whether these models can be steered towards human decisions, the impact of adding photos, and whether anti-discrimination prompting is effective. We find that LMs can be steered to outperform humans and COMPAS using in-context learning. We find anti-discrimination prompting to have unintended effects, causing some models to inhibit themselves and significantly reduce their number of positive predictions.


Rethinking recidivism through a causal lens

Shirvaikar, Vik, Lakshminarayan, Choudur

arXiv.org Artificial Intelligence

Predictive modeling of criminal recidivism, or whether people will re-offend in the future, has a long and contentious history. Modern causal inference methods allow us to move beyond prediction and target the "treatment effect" of a specific intervention on an outcome in an observational dataset. In this paper, we look specifically at the effect of incarceration (prison time) on recidivism, using a well-known dataset from North Carolina. Two popular causal methods for addressing confounding bias are explained and demonstrated: directed acyclic graph (DAG) adjustment and double machine learning (DML), including a sensitivity analysis for unobserved confounders. We find that incarceration has a detrimental effect on recidivism, i.e., longer prison sentences make it more likely that individuals will re-offend after release, although this conclusion should not be generalized beyond the scope of our data. We hope that this case study can inform future applications of causal inference to criminal justice analysis.
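Double machine learning, one of the two methods the abstract demonstrates, estimates a treatment effect by residualizing both outcome and treatment with flexible ML models under cross-fitting, then regressing residuals on residuals. The sketch below shows the idea for a partially linear model on synthetic data; the learner choice, variable names, and data-generating process are our illustrative assumptions, not the paper's setup.

```python
# Minimal double machine learning (DML) sketch for a partially linear model:
# cross-fitted residual-on-residual regression (Chernozhukov et al. style).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

def dml_effect(X, d, y, n_splits=2, seed=0):
    """Effect of treatment d on outcome y, controlling for confounders X."""
    y_res = np.zeros_like(y, dtype=float)
    d_res = np.zeros_like(d, dtype=float)
    for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        # Nuisance models: predict outcome and treatment from confounders,
        # always evaluating on the held-out fold (cross-fitting).
        m_y = RandomForestRegressor(random_state=seed).fit(X[train], y[train])
        m_d = RandomForestRegressor(random_state=seed).fit(X[train], d[train])
        y_res[test] = y[test] - m_y.predict(X[test])
        d_res[test] = d[test] - m_d.predict(X[test])
    # Residual-on-residual OLS slope is the effect estimate.
    return float(d_res @ y_res / (d_res @ d_res))

# Synthetic confounded data: X[:, 0] drives both treatment and outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
d = X[:, 0] + rng.normal(size=500)             # confounded treatment
y = 2.0 * d + X[:, 0] + rng.normal(size=500)   # true effect is 2.0
effect = dml_effect(X, d, y)                   # should recover roughly 2.0
```

A naive regression of y on d alone would be biased upward here because X[:, 0] raises both; residualizing both sides against X removes that confounding, which is the point of the method in the incarceration-and-recidivism setting.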


Fairness and Explainability: Bridging the Gap Towards Fair Model Explanations

Zhao, Yuying, Wang, Yu, Derr, Tyler

arXiv.org Artificial Intelligence

While machine learning models have achieved unprecedented success in real-world applications, they might make biased/unfair decisions for specific demographic groups and hence result in discriminative outcomes. Although research efforts have been devoted to measuring and mitigating bias, they mainly study bias from the result-oriented perspective while neglecting the bias encoded in the decision-making procedure. This results in their inability to capture procedure-oriented bias, which therefore limits the ability to have a fully debiasing method. Fortunately, with the rapid development of explainable machine learning, explanations for predictions are now available to gain insights into the procedure. In this work, we bridge the gap between fairness and explainability by presenting a novel perspective of procedure-oriented fairness based on explanations. We identify the procedure-based bias by measuring the gap of explanation quality between different groups with Ratio-based and Value-based Explanation Fairness. The new metrics further motivate us to design an optimization objective to mitigate the procedure-based bias where we observe that it will also mitigate bias from the prediction. Based on our designed optimization objective, we propose a Comprehensive Fairness Algorithm (CFA), which simultaneously fulfills multiple objectives - improving traditional fairness, satisfying explanation fairness, and maintaining the utility performance. Extensive experiments on real-world datasets demonstrate the effectiveness of our proposed CFA and highlight the importance of considering fairness from the explainability perspective. Our code is publicly available at https://github.com/YuyingZhao/FairExplanations-CFA .
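To make the abstract's notion of ratio-based explanation fairness concrete, here is a small sketch: score explanation quality per demographic group, then compare the worst group to the best. The fidelity proxy and the exact ratio definition below are our illustrative assumptions, not the paper's formal metrics.

```python
# Illustrative ratio-style explanation-fairness check. The metric names
# follow the abstract; the precise definitions here are assumptions.
import numpy as np

def explanation_quality(pred_full, pred_topk):
    """Fidelity proxy: fraction of predictions unchanged when the model
    sees only the top-k features highlighted by the explanation."""
    return float(np.mean(pred_full == pred_topk))

def ratio_explanation_fairness(quality_by_group):
    """Worst-to-best ratio of group explanation quality; 1.0 means parity."""
    vals = list(quality_by_group.values())
    return min(vals) / max(vals)

# Toy example: explanations are less faithful for group_a than group_b.
q = {"group_a": explanation_quality(np.array([1, 0, 1, 1]),
                                    np.array([1, 0, 1, 0])),
     "group_b": explanation_quality(np.array([1, 1, 0, 0]),
                                    np.array([1, 1, 0, 0]))}
fairness = ratio_explanation_fairness(q)  # 0.75: quality 0.75 vs 1.0
```

A ratio below 1.0 flags the procedure-oriented bias the abstract describes: the model's decisions are explained less faithfully for one group, even if result-oriented fairness metrics look acceptable.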


Brave Behind Bars: Prison education program focuses on computing skills for women

#artificialintelligence

One of the co-founders, Martin Nisser, a PhD student from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), explains the digital literacy and self-efficacy focused objectives: "Some of the women haven't had the opportunity to work with a computer for 25 years, and aren't yet accustomed to using the internet. We're working with them to build their capabilities with these modern tools in order to prepare them for life outside," says Nisser. Even for the students who became incarcerated more recently, it can be difficult to keep up with the fast pace of technological advances, since technical programs in correctional facilities are few and far between. This scarcity of preparatory programs undoubtedly contributes to high and rising recidivism rates: more often than not, those who are released from prison eventually return. While working at TEJI, Nisser had a fortuitous meeting with his two co-founders, Marisa Gaetz (a PhD student from MIT's Department of Mathematics) and Emily Harburg (co-founder of Brave Initiatives, a nonprofit that develops coding bootcamps for young women).


Justice, Equity, And Fairness: Exploring The Tense Relationship Between Artificial Intelligence And The Law With Joilson Melo

#artificialintelligence

AI is becoming more and more prevalent in society, and many people wonder how it will affect the law: how artificial intelligence is impacting our laws, and what we can expect from future interactions between technology and the legal system. The conversation surrounding the relationship between AI and law also touches on whether artificial intelligence can be relied upon to deliver fair decisions and to enhance the legal system's delivery of equity and justice. In this article, I share insights from my conversations on this topic with Joilson Melo, a Brazilian law expert and programmer whose devotion to equity and fairness led to a historic change in the Brazilian legal system in 2019. That change mainly affected the PJe (Electronic Judicial Process), the system through which all digital court proceedings in Brazil are processed. As a law student, Melo filed a request for action with the National Council of Justice (CNJ) against the Court of Justice of Mato Grosso, resulting in a decision that allows citizens to file applications in court electronically, without a lawyer, within the Special Court, provided the value of the case does not exceed 20 minimum wages.


AI and EI as allies

#artificialintelligence

Both artificial intelligence (AI) and emotional intelligence (EI) have critical roles to play in security. At the same time, Maureen Metcalf of the Forbes Coaches Council has published leadership trends for 2021: economic instability, erosion of trust in societal institutions, and decreasing worker privacy as the office moves home. The trick for security professionals is to connect the skills and mindsets on that leadership list to the phenomena that make up the security megatrends. The gap between the two, which threatens to become a chasm during these times of tectonic shifts, must be bridged if security professionals are not to be left behind.