 fairness


Olympic gold medalists rip Newsom for California's trans athlete situation ahead of protested track meet

FOX News

California girls' track and field student-athletes protest trans inclusion ahead of state meet. California high school student-athletes Olivia Viola and Reese Hogan speak at a rally ahead of a major track and field event to oppose trans athletes in their sports.
Three-time Olympic women's gold medalists Nancy Hogshead and Kaillie Humphries have spoken out on the growing girls' track and field controversy in California, where a trans athlete is looking to defend a pair of state titles. Hogshead criticized California Gov. Gavin Newsom over his state's policies, which continue to allow trans athletes in women's sports. She was responding to a statement from a source within Newsom's office that read, "The Governor has said discussions on this issue should be guided by fairness, dignity, and respect." "Governor Newsom seems to exclude girls from his own standard of 'fairness, dignity and respect,'" Hogshead said.


White House calls out Newsom as California girls' track and field controversy reignites

FOX News

Spokeswoman called Newsom 'a truly sick individual who has no regard for fairness, dignity, and respect.' Jurupa Valley High School graduate Hadeel Hazameh responded to the news that the Trump administration has launched a Title IX investigation into her district over an incident involving a trans volleyball teammate, an episode that led her to graduate early and leave her sports career behind. President Donald Trump's White House has officially put California Gov. Gavin Newsom on notice as a controversial girls' track and field postseason is set to begin this weekend. A White House spokesperson called out Newsom in a statement to Fox News Digital as his state continues to allow biological male trans athletes to compete in girls' high school sports. "Gavin Newscum is a truly sick individual who has no regard for fairness, dignity, and respect. If he did, he wouldn't allow men to compete in women's sports, limiting women's opportunities and jeopardizing their health and safety."


Fairness Constraints in High-Dimensional Generalized Linear Models

Lin, Yixiao, Booth, James

arXiv.org Machine Learning

Machine learning models often inherit biases from historical data, raising critical concerns about fairness and accountability. Conventional fairness interventions typically require access to sensitive attributes like gender or race, but privacy and legal restrictions frequently limit their use. To address this challenge, we propose a framework that infers sensitive attributes from auxiliary features and integrates fairness constraints into model training. Our approach mitigates bias while preserving predictive accuracy, offering a practical solution for fairness-aware learning. Empirical evaluations validate its effectiveness, contributing to the advancement of more equitable algorithmic decision-making.
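The abstract above describes constraint-based training with a sensitive attribute inferred from auxiliary features. As a minimal sketch of the general idea (not the authors' method), one can add a demographic-parity penalty on the inferred group label to an ordinary loss; `dp_penalty`, `penalized_loss`, and the simple mean-gap penalty are illustrative assumptions:

```python
import numpy as np

def dp_penalty(preds, inferred_group):
    """Demographic-parity gap: difference in mean prediction
    between the two inferred groups (labeled 0 and 1)."""
    return abs(preds[inferred_group == 0].mean()
               - preds[inferred_group == 1].mean())

def penalized_loss(preds, labels, inferred_group, lam=1.0):
    """MSE plus a fairness penalty weighted by lam; note the
    group label is the *inferred* attribute, not an observed one."""
    mse = np.mean((preds - labels) ** 2)
    return mse + lam * dp_penalty(preds, inferred_group)
```

Minimizing such a penalized loss trades predictive accuracy against the inferred-group gap through the weight `lam`.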


Demographic Parity Tails for Regression

Le, Naht Sinh, Denis, Christophe, Hebiri, Mohamed

arXiv.org Machine Learning

Demographic parity (DP) is a widely studied fairness criterion in regression, enforcing independence between the predictions and sensitive attributes. However, constraining the entire distribution can degrade predictive accuracy and may be unnecessary for many applications, where fairness concerns are localized to specific regions of the distribution. To overcome this issue, we propose a new framework for regression under DP that focuses on the tails of the target distribution across sensitive groups. Our methodology builds on optimal transport theory. By enforcing fairness constraints only over targeted regions of the distribution, our approach enables more nuanced and context-sensitive interventions. Building on recent advances, we develop an interpretable and flexible algorithm that leverages the geometric structure of optimal transport. We provide theoretical guarantees, including risk bounds and fairness properties, and validate the method through experiments in regression settings.
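In one dimension the optimal transport map reduces to quantile matching, which suggests what a tail-only DP correction might look like. The sketch below is an illustrative reading, not the paper's algorithm; `align_tails` and the pooled-tail target are assumptions. It remaps each group's predictions above a quantile cutoff onto the pooled tail distribution and leaves everything else untouched:

```python
import numpy as np

def align_tails(preds, group, q=0.9):
    """Remap each group's upper tail onto the pooled tail via 1-D
    quantile matching (the optimal transport map on the line).
    Predictions below the q-quantile cutoff are left unchanged."""
    out = preds.astype(float).copy()
    cut = np.quantile(preds, q)
    pooled_tail = np.sort(preds[preds >= cut])
    for g in np.unique(group):
        mask = (group == g) & (preds >= cut)
        n = mask.sum()
        if n == 0:
            continue
        ranks = preds[mask].argsort().argsort()   # within-group ranks
        qs = (ranks + 0.5) / n                    # mid-quantile positions
        out[mask] = np.quantile(pooled_tail, qs)
    return out
```

After the remap, the upper tails of all groups follow the same (pooled) distribution, while the bulk of each group's predictions is unaffected.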


Fair regression under localized demographic parity constraints

Charpentier, Arthur, Denis, Christophe, Elie, Romuald, Hebiri, Mohamed, Hu, François

arXiv.org Machine Learning

Demographic parity (DP) is a widely used group fairness criterion requiring predictive distributions to be invariant across sensitive groups. While natural in classification, full distributional DP is often overly restrictive in regression and can lead to substantial accuracy loss. We propose a relaxation of DP tailored to regression, enforcing parity only at a finite set of quantile levels and/or score thresholds. Concretely, we introduce a novel $(\ell, Z)$-fair predictor, which imposes groupwise CDF constraints of the form $F_{f|S=s}(z_m) = \ell_m$ for prescribed pairs $(\ell_m, z_m)$. For this setting, we derive closed-form characterizations of the optimal fair discretized predictor via a Lagrangian dual formulation and quantify the discretization cost, showing that the risk gap to the continuous optimum vanishes as the grid is refined. We further develop a model-agnostic post-processing algorithm based on two samples (labeled for learning a base regressor and unlabeled for calibration), and establish finite-sample guarantees on constraint violation and excess penalized risk. In addition, we introduce two alternative frameworks where we match group and marginal CDF values at selected score thresholds. In both settings, we provide closed-form solutions for the optimal fair discretized predictor. Experiments on synthetic and real datasets illustrate an interpretable fairness-accuracy trade-off, enabling targeted corrections at decision-relevant quantiles or thresholds while preserving predictive performance.
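The constraints $F_{f|S=s}(z_m) = \ell_m$ described above can be checked empirically on held-out predictions. A small sketch (the helper name and the empirical-CDF check are assumptions, not the paper's code):

```python
import numpy as np

def cdf_violation(preds, group, z_levels, ell_targets):
    """Largest deviation of the empirical groupwise CDF from its
    target: the max over groups s and pairs (z_m, ell_m) of
    |P(f(X) <= z_m | S = s) - ell_m|."""
    worst = 0.0
    for s in np.unique(group):
        p = preds[group == s]
        for z, ell in zip(z_levels, ell_targets):
            worst = max(worst, abs(np.mean(p <= z) - ell))
    return worst
```

A violation of zero means every group's empirical CDF hits every prescribed level exactly at the chosen thresholds.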



A Model Ensemble-Based Post-Processing Framework for Fairness-Aware Prediction

Zhao, Zhouting, Ng, Tin Lok James

arXiv.org Machine Learning

Striking an optimal balance between predictive performance and fairness continues to be a fundamental challenge in machine learning. In this work, we propose a post-processing framework that facilitates fairness-aware prediction by leveraging model ensembling. Designed to operate independently of any specific model internals, our approach is widely applicable across various learning tasks, model architectures, and fairness definitions. Through extensive experiments spanning classification, regression, and survival analysis, we demonstrate that the framework effectively enhances fairness while maintaining, or only minimally affecting, predictive accuracy.
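Because the framework operates independently of model internals, one way to picture ensemble-based post-processing is blending an accurate model with a fairer one until a fairness target is met. The sketch below is an illustrative reading, not the paper's algorithm; `blend_for_fairness` and the group-mean gap criterion are assumptions:

```python
import numpy as np

def blend_for_fairness(pred_acc, pred_fair, group, max_gap):
    """Return the smallest convex weight toward the fairer model
    whose blended predictions have a group-mean gap <= max_gap."""
    for w in np.linspace(0.0, 1.0, 101):
        blend = (1 - w) * pred_acc + w * pred_fair
        gap = abs(blend[group == 0].mean() - blend[group == 1].mean())
        if gap <= max_gap:
            return w, blend
    return 1.0, pred_fair
```

Sweeping `max_gap` traces out the fairness-accuracy trade-off the abstract refers to: small weights preserve the accurate model, larger ones shrink the group gap.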


A principled approach for data bias mitigation

AIHub

How do you know if your data is fair? And if it isn't, what can you do about it? Machine learning models are increasingly used to make high-stakes decisions, from predicting who gets a loan to estimating the likelihood that someone will reoffend. But these models are only as good as the data they learn from [Shahbazi 2023]. If the training data is biased, the model's decisions will likely be biased too [Hort 2024, Pagano 2023].


Recycling Privileged Learning and Distribution Matching for Fairness

Neural Information Processing Systems

Equipping machine learning models with ethical and legal constraints is a pressing concern; without this, the future of machine learning is at risk. This paper takes a step forward in this direction and focuses on ensuring that machine learning models deliver fair decisions. In legal scholarship, the notion of fairness itself is evolving and multi-faceted. We set an overarching goal to develop a unified machine learning framework that is able to handle any definition of fairness, combinations of definitions, and also new definitions that might be stipulated in the future. To achieve our goal, we recycle two well-established machine learning techniques, privileged learning and distribution matching, and harmonize them to satisfy multi-faceted fairness definitions.


Equality of Opportunity in Classification: A Causal Approach

Neural Information Processing Systems

The Equalized Odds (for short, EO) is one of the most popular measures of discrimination used in the supervised learning setting. It ascertains fairness through the balance of the misclassification rates (false positive and negative) across the protected groups -- e.g., in the context of law enforcement, an African-American defendant who would not commit a future crime will have an equal opportunity of being released, compared to a non-recidivating Caucasian defendant. Despite this noble goal, it has been acknowledged in the literature that statistical tests based on the EO are oblivious to the underlying causal mechanisms that generated the disparity in the first place (Hardt et al. 2016). This leads to a critical disconnect between statistical measures readable from the data and the meaning of discrimination in the legal system, where compelling evidence that the observed disparity is tied to a specific causal process deemed unfair by society is required to characterize discrimination. The goal of this paper is to develop a principled approach to connect the statistical disparities characterized by the EO and the underlying, elusive, and frequently unobserved causal mechanisms that generated such inequality. We start by introducing a new family of counterfactual measures that allows one to explain the misclassification disparities in terms of the underlying mechanisms in an arbitrary, non-parametric structural causal model. This will, in turn, allow legal and data analysts to interpret currently deployed classifiers through a causal lens, linking the statistical disparities found in the data to the corresponding causal processes. Leveraging the new family of counterfactual measures, we develop a learning procedure to construct a classifier that is statistically efficient, interpretable, and compatible with the basic human intuition of fairness. We demonstrate our results through experiments in both real (COMPAS) and synthetic datasets.
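The EO balance the abstract describes can be quantified directly from a classifier's outputs as the gaps in false positive and false negative rates across groups. A minimal sketch (the function name and two-group setup are assumptions, and it presumes each group contains both label values):

```python
import numpy as np

def eo_gaps(y_true, y_pred, group):
    """Equalized-odds gaps between groups 0 and 1: absolute
    differences in false positive and false negative rates."""
    def rates(mask):
        yt, yp = y_true[mask], y_pred[mask]
        fpr = np.mean(yp[yt == 0] == 1)   # false positive rate
        fnr = np.mean(yp[yt == 1] == 0)   # false negative rate
        return fpr, fnr
    fpr0, fnr0 = rates(group == 0)
    fpr1, fnr1 = rates(group == 1)
    return abs(fpr0 - fpr1), abs(fnr0 - fnr1)
```

Both gaps equal zero exactly when EO holds; the paper's point is that nonzero gaps alone do not identify which causal mechanism produced them.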