Discrimination


The DOJ is backing xAI in its lawsuit against Colorado

Engadget

The Department of Justice has announced that it's intervening on behalf of xAI in the company's recent lawsuit against the state of Colorado. The suit targets Colorado Bill SB24-205, an AI regulation set to go into effect in June, and the DOJ is now asking a Colorado District Court to declare it unconstitutional. In xAI's original argument, the law violated the company's First Amendment rights by forcing its developers to change how they create AI products and compelling them to align those products with Colorado's views on diversity and discrimination. The DOJ acknowledges those concerns in its complaint but specifically focuses its argument on the claim that the law violates the Equal Protection Clause of the Fourteenth Amendment. According to the DOJ, because the law relies on demographics and statistical disparities as evidence of discrimination, it would effectively require developers to distort an AI system's outputs and discriminate based on race, sex, religion, and other protected characteristics.


Trump DOJ jumps into Musk xAI court battle as diversity fight heats up

FOX News

The DOJ joined Elon Musk's xAI in suing Colorado, alleging a state AI regulation law violates the First and Fourteenth Amendments by forcing developers to adopt DEI ideology.


A principled approach for data bias mitigation

AIHub

How do you know if your data is fair? And if it isn't, what can you do about it? Machine learning models are increasingly used to make high-stakes decisions, from predicting who gets a loan to estimating the likelihood that someone will reoffend. But these models are only as good as the data they learn from [Shahbazi 2023]. If the training data is biased, the model's decisions will likely be biased too [Hort 2024, Pagano 2023].
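A first concrete check of the kind the article alludes to is measuring how unevenly the favorable outcome is distributed across groups in the training data itself. The sketch below (the column names `group` and `label`, and the toy records, are hypothetical) computes per-group base rates and the disparate impact ratio:

```python
from collections import Counter

def group_positive_rates(records, group_key="group", label_key="label"):
    """Per-group rate of the favorable label (label == 1) in a dataset."""
    totals, positives = Counter(), Counter()
    for row in records:
        g = row[group_key]
        totals[g] += 1
        positives[g] += int(row[label_key] == 1)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(records, privileged, unprivileged, **keys):
    """Ratio of favorable-outcome rates; the '80% rule' flags values < 0.8."""
    rates = group_positive_rates(records, **keys)
    return rates[unprivileged] / rates[privileged]

# Hypothetical toy data: group A gets the favorable label far more often.
data = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "A", "label": 1}, {"group": "A", "label": 0},
    {"group": "B", "label": 1}, {"group": "B", "label": 0},
    {"group": "B", "label": 0}, {"group": "B", "label": 0},
]
ratio = disparate_impact(data, privileged="A", unprivileged="B")  # 0.25 / 0.75
```

A ratio this far below 0.8 is exactly the kind of data-level bias that the mitigation techniques surveyed in the article try to correct before training.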


Optimized Pre-Processing for Discrimination Prevention

Neural Information Processing Systems

Non-discrimination is a recognized objective in algorithmic decision making. In this paper, we introduce a novel probabilistic formulation of data pre-processing for reducing discrimination. We propose a convex optimization for learning a data transformation with three goals: controlling discrimination, limiting distortion in individual data samples, and preserving utility. We characterize the impact of limited sample size in accomplishing this objective. Two instances of the proposed optimization are applied to datasets, including one on real-world criminal recidivism. Results show that discrimination can be greatly reduced at a small cost in classification accuracy.
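The paper's full convex program is beyond a short listing, but the underlying pre-processing idea (transform or reweight the data so the protected attribute and the outcome decouple before training) can be illustrated with the simpler classic reweighing scheme of Kamiran and Calders. This is a sketch of that related technique, not the authors' optimization:

```python
from collections import Counter

def reweigh(samples):
    """Instance weights w(a, y) = P(A=a) * P(Y=y) / P(A=a, Y=y).

    Under these weights the protected attribute A and label Y become
    statistically independent, removing the base-rate disparity a
    downstream classifier would otherwise learn. `samples` is a list
    of (a, y) pairs.
    """
    n = len(samples)
    count_a = Counter(a for a, y in samples)
    count_y = Counter(y for a, y in samples)
    count_ay = Counter(samples)
    return {
        (a, y): (count_a[a] / n) * (count_y[y] / n) / (count_ay[(a, y)] / n)
        for (a, y) in count_ay
    }

# Biased toy data: group "A" is mostly labeled 1, group "B" mostly 0.
samples = [("A", 1)] * 6 + [("A", 0)] * 2 + [("B", 1)] * 2 + [("B", 0)] * 6
weights = reweigh(samples)
```

The paper's formulation generalizes this idea from reweighting to a learned probabilistic transformation, with explicit constraints on individual-level distortion and utility loss.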


Equality of Opportunity in Supervised Learning

Neural Information Processing Systems

We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features. Assuming data about the predictor, target, and membership in the protected group are available, we show how to optimally adjust any learned predictor so as to remove discrimination according to our definition. Our framework also improves incentives by shifting the cost of poor classification from disadvantaged groups to the decision maker, who can respond by improving the classification accuracy.
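A minimal instance of this post-processing idea is choosing group-specific decision thresholds so that true positive rates match across groups (the equal-opportunity condition). The sketch below uses hypothetical toy scores and a simple threshold search rather than the paper's randomized derived predictor:

```python
def tpr(scores, labels, threshold):
    """True positive rate of the rule 'predict 1 iff score >= threshold'."""
    positives = [s for s, y in zip(scores, labels) if y == 1]
    return sum(s >= threshold for s in positives) / len(positives)

def equal_opportunity_thresholds(groups, target_tpr):
    """Per-group threshold whose TPR is closest to target_tpr.

    `groups` maps group name -> (scores, labels). Searching only each
    group's own score values suffices, since TPR changes only there.
    """
    return {
        g: min(set(scores), key=lambda t: abs(tpr(scores, labels, t) - target_tpr))
        for g, (scores, labels) in groups.items()
    }

# Hypothetical scores: the classifier systematically scores group B lower.
groups = {
    "A": ([0.9, 0.8, 0.4, 0.3, 0.5, 0.2], [1, 1, 1, 1, 0, 0]),
    "B": ([0.6, 0.5, 0.2, 0.1, 0.4, 0.3], [1, 1, 1, 1, 0, 0]),
}
thresholds = equal_opportunity_thresholds(groups, target_tpr=0.5)
# Both groups now have TPR 0.5 despite the score shift between them.
```

The paper's construction is more general: it handles the case where no deterministic threshold achieves exact equality by randomizing between adjacent thresholds.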


Equality of Opportunity in Classification: A Causal Approach

Junzhe Zhang, Elias Bareinboim

Neural Information Processing Systems

The Equalized Odds (for short, EO) criterion is one of the most popular measures of discrimination used in the supervised learning setting. It ascertains fairness through the balance of the misclassification rates (false positive and false negative) across the protected groups -- e.g., in the context of law enforcement, an African-American defendant who would not commit a future crime should have an equal opportunity of being released, compared to a non-recidivating Caucasian defendant. Despite this noble goal, it has been acknowledged in the literature that statistical tests based on the EO are oblivious to the underlying causal mechanisms that generated the disparity in the first place (Hardt et al. 2016). This leads to a critical disconnect between statistical measures readable from the data and the meaning of discrimination in the legal system, where characterizing discrimination requires compelling evidence that the observed disparity is tied to a specific causal process deemed unfair by society. The goal of this paper is to develop a principled approach to connect the statistical disparities characterized by the EO and the underlying, elusive, and frequently unobserved causal mechanisms that generated such inequality. We start by introducing a new family of counterfactual measures that allows one to explain the misclassification disparities in terms of the underlying mechanisms in an arbitrary, non-parametric structural causal model. This will, in turn, allow legal and data analysts to interpret currently deployed classifiers through a causal lens, linking the statistical disparities found in the data to the corresponding causal processes. Leveraging the new family of counterfactual measures, we develop a learning procedure to construct a classifier that is statistically efficient, interpretable, and compatible with the basic human intuition of fairness. We demonstrate our results through experiments on both real (COMPAS) and synthetic datasets.
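The purely statistical EO balance the abstract starts from can be checked directly from predictions and labels. The sketch below (with hypothetical toy data) computes the cross-group spread in error rates; it illustrates the test the abstract calls blind to causal mechanisms, not the paper's counterfactual measures, which additionally require a structural causal model:

```python
def error_rates(preds, labels):
    """(false positive rate, false negative rate) of binary predictions."""
    fp = sum(p == 1 and y == 0 for p, y in zip(preds, labels))
    fn = sum(p == 0 and y == 1 for p, y in zip(preds, labels))
    negatives = sum(y == 0 for y in labels)
    positives = sum(y == 1 for y in labels)
    return fp / negatives, fn / positives

def eo_gaps(groups):
    """Max cross-group spread in FPR and FNR; EO holds when both are ~0.

    `groups` maps group name -> (predictions, labels).
    """
    fprs, fnrs = zip(*(error_rates(p, y) for p, y in groups.values()))
    return max(fprs) - min(fprs), max(fnrs) - min(fnrs)

# Hypothetical classifier outputs for two groups.
groups = {
    "a": ([1, 1, 0, 0], [1, 0, 1, 0]),
    "b": ([1, 0, 0, 0], [1, 0, 1, 0]),
}
fpr_gap, fnr_gap = eo_gaps(groups)
```

Two classifiers with identical (even zero) gaps under this test can still differ in whether the residual disparity flows through a causally fair or unfair pathway, which is precisely the gap the paper's counterfactual measures are built to close.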


