subpopulation


Empirical Likelihood-Based Fairness Auditing: Distribution-Free Certification and Flagging

Tang, Jie, Xie, Chuanlong, Zeng, Xianli, Zhu, Lixing

arXiv.org Machine Learning

Machine learning models in high-stakes applications, such as recidivism prediction and automated personnel selection, often exhibit systematic performance disparities across sensitive subpopulations, raising critical concerns regarding algorithmic bias. Fairness auditing addresses these risks through two primary functions: certification, which verifies adherence to fairness constraints; and flagging, which isolates specific demographic groups experiencing disparate treatment. However, existing auditing techniques are frequently limited by restrictive distributional assumptions or prohibitive computational overhead. We propose a novel empirical likelihood-based (EL) framework that constructs robust statistical measures for model performance disparities. Unlike traditional methods, our approach is non-parametric; the proposed disparity statistics follow asymptotically chi-square or mixed chi-square distributions, ensuring valid inference without assuming underlying data distributions. This framework uses a constrained optimization profile that admits stable numerical solutions, facilitating both large-scale certification and efficient subpopulation discovery. Empirically, the EL methods outperform bootstrap-based approaches, yielding coverage rates closer to nominal levels while reducing computational latency by several orders of magnitude. We demonstrate the practical utility of this framework on the COMPAS dataset, where it successfully flags intersectional biases, specifically identifying a significantly higher positive prediction rate for African-American males under 25 and a systemic under-prediction for Caucasian females relative to the population mean.
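To make the flavor of this approach concrete, the sketch below computes a standard Owen-style empirical likelihood ratio for a single mean (for example, a group's per-example error indicator tested against a hypothesized disparity-free rate) and calibrates it against a chi-square(1) reference. This is a generic EL building block on toy data, not the paper's disparity statistic or certification procedure; the function name, the simulated error flags, and the target rate of 0.25 are assumptions of the sketch.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def el_log_ratio(x, mu0):
    """Owen's empirical likelihood statistic -2 log R(mu0) for H0: E[X] = mu0."""
    z = np.asarray(x, dtype=float) - mu0
    if z.min() >= 0 or z.max() <= 0:          # mu0 outside the convex hull of the data
        return np.inf
    eps = 1e-10
    lo = (-1.0 + eps) / z.max()               # keep 1 + lam * z_i > 0 for every i
    hi = (-1.0 + eps) / z.min()
    score = lambda lam: np.sum(z / (1.0 + lam * z))
    lam = brentq(score, lo, hi)               # Lagrange multiplier of the EL profile
    return 2.0 * np.sum(np.log1p(lam * z))    # asymptotically chi-square with 1 df

# Toy audit: does a group's error rate equal a hypothesized disparity-free rate?
rng = np.random.default_rng(0)
group_errors = rng.binomial(1, 0.32, size=400)     # simulated per-example error flags
stat = el_log_ratio(group_errors, mu0=0.25)
print(f"EL statistic = {stat:.3f}, p = {chi2.sf(stat, df=1):.4f}")
```

Because the calibration comes from an asymptotic chi-square law rather than from resampling, no bootstrap replicates are required, which is consistent with the latency gains the abstract reports.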


Characterizing Datapoints via Second-Split Forgetting

Neural Information Processing Systems

Researchers investigating example hardness have increasingly focused on the dynamics by which neural networks learn and forget examples throughout training. Popular metrics derived from these dynamics include (i) the epoch at which examples are first correctly classified; (ii) the number of times their predictions flip during training; and (iii) whether their prediction flips if they are held out. However, these metrics do not distinguish among examples that are hard for distinct reasons, such as membership in a rare subpopulation, being mislabeled, or belonging to a complex subpopulation. In this paper, we propose second-split forgetting time (SSFT), a complementary metric that tracks the epoch (if any) after which an original training example is forgotten as the network is fine-tuned on a randomly held-out partition of the data. Across multiple benchmark datasets and modalities, we demonstrate that mislabeled examples are forgotten quickly, while seemingly rare examples are forgotten comparatively slowly. By contrast, metrics that only consider the first-split learning dynamics struggle to differentiate the two. At large learning rates, SSFT tends to be robust across architectures, optimizers, and random seeds. From a practical standpoint, the SSFT can (i) help to identify mislabeled samples, the removal of which improves generalization; and (ii) provide insights about failure modes. Through theoretical analysis of overparameterized linear models, we provide insights into how the observed phenomena may arise.
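As a concrete reading of the metric, the sketch below shows one way a second-split forgetting time could be extracted from per-epoch correctness records logged for the original training examples while the network is fine-tuned on the second split. This is not the authors' code; the array layout, the "never correct again" rule, and the toy records are assumptions of the sketch.

```python
import numpy as np

def second_split_forgetting_time(correct_on_split1):
    """correct_on_split1: (num_epochs, num_examples) bools, recorded on the ORIGINAL
    training examples after each fine-tuning epoch on the held-out second split.
    Returns, per example, the first epoch after which it is never again classified
    correctly (inf if it is still correct at the final recorded epoch)."""
    num_epochs, num_examples = correct_on_split1.shape
    ssft = np.full(num_examples, np.inf)
    for j in range(num_examples):
        col = correct_on_split1[:, j]
        if not col[-1]:                            # misclassified at the final epoch
            correct_epochs = np.flatnonzero(col)
            ssft[j] = correct_epochs[-1] + 1 if correct_epochs.size else 0
    return ssft

# Toy records for three examples over three fine-tuning epochs.
records = np.array([[1, 1, 1],
                    [0, 1, 1],
                    [0, 0, 1]], dtype=bool)
print(second_split_forgetting_time(records))       # [1. 2. inf] -> example 0 forgotten first
```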


Unintended Selection: Persistent Qualification Rate Disparities and Interventions

Neural Information Processing Systems

Realistically---and equitably---modeling the dynamics of group-level disparities in machine learning remains an open problem. In particular, we desire models that do not suppose inherent differences between artificial groups of people---but rather endogenize disparities by appeal to unequal initial conditions of insular subpopulations. In this paper, agents each have a real-valued feature $X$ (e.g., credit score) informed by a ``true'' binary label $Y$ representing qualification (e.g., for a loan). Each agent alternately (1) receives a binary classification label $\hat{Y}$ (e.g., loan approval) from a Bayes-optimal machine learning classifier observing $X$ and (2) may update their qualification $Y$ by imitating successful strategies (e.g., seek a raise) within an isolated group $G$ of agents to which they belong. We consider the disparity of qualification rates $\Pr(Y=1)$ between different groups and how this disparity changes subject to a sequence of Bayes-optimal classifiers repeatedly retrained on the global population. We model the evolving qualification rates of each subpopulation (group) using the replicator equation, which derives from a class of imitation processes. We show that differences in qualification rates between subpopulations can persist indefinitely for a set of non-trivial equilibrium states due to uniform classifier deployments, even when groups are identical in all aspects except initial qualification densities. We next simulate the effects of commonly proposed fairness interventions on this dynamical system along with a new feedback control mechanism capable of permanently eliminating group-level qualification rate disparities. We conclude by discussing the limitations of our model and findings and by outlining potential future work.
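The toy simulation below illustrates the kind of feedback loop the abstract describes: a threshold classifier is repeatedly refit to the pooled population while each group's qualification rate follows a discrete replicator update. The Gaussian feature model, the payoff terms (an acceptance benefit minus a fixed cost of qualifying), and the step size are illustrative assumptions, not the paper's specification.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

MU0, MU1 = 0.0, 1.0                       # X | Y=0 ~ N(0, 1),  X | Y=1 ~ N(1, 1)

def bayes_threshold(q_pool):
    """Threshold of the Bayes-optimal classifier for pooled qualification rate q_pool."""
    f = lambda x: q_pool * norm.pdf(x, MU1) - (1 - q_pool) * norm.pdf(x, MU0)
    return brentq(f, -10, 10)             # point where P(Y=1 | x) crosses 1/2

def replicator_step(q, tau, benefit=1.0, cost=0.3, eta=0.5):
    """One discrete (Euler) replicator update for a group's qualification rate q."""
    fit_q   = benefit * norm.sf(tau, MU1) - cost   # qualified: acceptance prob minus cost
    fit_unq = benefit * norm.sf(tau, MU0)          # unqualified: acceptance prob only
    mean_fit = q * fit_q + (1 - q) * fit_unq
    return q + eta * q * (fit_q - mean_fit)

rates = np.array([0.3, 0.7])              # identical groups, unequal initial rates
for _ in range(200):
    tau = bayes_threshold(rates.mean())   # classifier retrained on the global population
    rates = np.clip([replicator_step(q, tau) for q in rates], 0.0, 1.0)
print(rates)                              # the gap between the groups does not close
```

Because the threshold depends only on the pooled qualification rate, the interior equilibrium condition is shared by both groups, so groups that start at different rates can settle at different rates indefinitely.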


SureMap: Simultaneous mean estimation for single-task and multi-task disaggregated evaluation

Neural Information Processing Systems

Disaggregated evaluation -- the estimation of a machine learning model's performance on different subpopulations -- is a core task when assessing the performance and group fairness of AI systems. A key challenge is that evaluation data is scarce, and subpopulations arising from intersections of attributes (e.g., race, sex, age) are often tiny. Today, it is common for multiple clients to procure the same AI model from a model developer, and the task of disaggregated evaluation is faced by each customer individually. This gives rise to what we call the multi-task disaggregated evaluation problem, wherein multiple clients seek to conduct a disaggregated evaluation of a given model in their own data setting (task).
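As a minimal illustration of the estimation problem (and not of the SureMap estimator itself), the sketch below contrasts naive per-group means with a simple shrinkage toward the pooled mean, a standard remedy when intersectional groups are tiny. The pseudo-count prior, the group encoding, and the toy data are assumptions of the sketch.

```python
import numpy as np

def disaggregated_means(scores, groups, prior_strength=10.0):
    """Per-group means of per-example scores (e.g., 0/1 correctness): naive, and
    shrunk toward the pooled mean with a fixed pseudo-count (an assumed prior)."""
    scores, groups = np.asarray(scores, dtype=float), np.asarray(groups)
    pooled = scores.mean()
    results = {}
    for g in np.unique(groups):
        s = scores[groups == g]
        naive = s.mean()
        shrunk = (s.sum() + prior_strength * pooled) / (len(s) + prior_strength)
        results[g] = (len(s), naive, shrunk)
    return results

# Toy data with intersectional groups; the smallest groups get the noisiest naive means.
rng = np.random.default_rng(1)
groups = rng.choice(["A,F", "A,M", "B,F", "B,M"], size=300, p=[0.45, 0.45, 0.07, 0.03])
scores = rng.binomial(1, 0.8, size=300)            # per-example correctness of the model
for g, (n, naive, shrunk) in disaggregated_means(scores, groups).items():
    print(f"{g}: n={n:3d}  naive={naive:.3f}  shrunk={shrunk:.3f}")
```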


SafeFix: Targeted Model Repair via Controlled Image Generation

Xu, Ouyang, Zhang, Baoming, Mao, Ruiyu, Guo, Yunhui

arXiv.org Artificial Intelligence

Deep learning models for visual recognition often exhibit systematic errors due to underrepresented semantic subpopulations. Although existing debugging frameworks can pinpoint these failures by identifying key failure attributes, repairing the model effectively remains difficult. Current solutions often rely on manually designed prompts to generate synthetic training images -- an approach prone to distribution shift and semantic errors. To overcome these challenges, we introduce a model repair module that builds on an interpretable failure attribution pipeline. Our approach uses a conditional text-to-image model to generate semantically faithful and targeted images for failure cases. To preserve the quality and relevance of the generated samples, we further employ a large vision-language model (LVLM) to filter the outputs, enforcing alignment with the original data distribution and maintaining semantic consistency. By retraining vision models with this rare-case-augmented synthetic dataset, we significantly reduce errors associated with rare cases. Our experiments demonstrate that this targeted repair strategy improves model robustness without introducing new bugs. Code is available at https://github.com/oxu2/SafeFix


Individual and group fairness in geographical partitioning

Ryzhov, Ilya O., Carlsson, John Gunnar, Zhu, Yinchu

arXiv.org Artificial Intelligence

Consider a service system in which individuals are served by facilities at different locations within a geographical region. For example, the facilities could represent schools, polling places, or commercial fulfillment centers. The geographical partitioning problem (Carlsson & Devulapalli 2013) divides the region into non-overlapping districts, such that all individuals residing in the same district are served by the same facility. The goal is to choose a partition that optimizes some measure of social welfare, most commonly the average travel cost per individual (Carlsson et al. 2016). We formulate and study a novel variant of this problem where the population is heterogeneous, consisting of multiple demographic groups, each with a different spatial distribution throughout the region. Again we optimize the expected cost, but now we also impose a new group fairness condition: each subpopulation can be neither over- nor under-represented at any facility. In other words, the districts are designed in such a way that the proportion of the population belonging to a particular group in any district must match that group's incidence in the entire population. This condition is also known as "demographic parity" in the literature (Dwork et al. 2012).
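A discretized toy version of this constrained problem can be written as a linear program: fractionally assign individuals to facilities so as to minimize average travel cost, subject to each facility's group mix equaling the population mix. The sketch below is only that toy formulation, not the paper's continuous partitioning method; the synthetic locations and the exact-parity equality constraints are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, k = 60, 3                                       # individuals, facilities
group = rng.integers(0, 2, size=n)                 # two demographic groups
locs = rng.uniform(0, 1, size=(n, 2)) + group[:, None] * np.array([0.3, 0.0])
facilities = np.array([[0.2, 0.2], [0.6, 0.8], [1.1, 0.4]])
cost = np.linalg.norm(locs[:, None, :] - facilities[None, :, :], axis=2).ravel()

A_eq, b_eq = [], []
for i in range(n):                                 # each individual is fully assigned
    row = np.zeros(n * k)
    row[i * k:(i + 1) * k] = 1.0
    A_eq.append(row)
    b_eq.append(1.0)
share1 = group.mean()                              # overall share of group 1
for j in range(k):                                 # parity: group-1 share at facility j = share1
    row = np.zeros(n * k)
    row[j::k] = group - share1                     # sum_i (g_i - share1) * x[i, j] = 0
    A_eq.append(row)
    b_eq.append(0.0)

res = linprog(cost, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, 1))
assign = res.x.reshape(n, k)                       # fractional assignment matrix
print("mean travel cost:", (assign * cost.reshape(n, k)).sum() / n)
print("group-1 share per facility:", (assign * group[:, None]).sum(axis=0) / assign.sum(axis=0))
```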


When Worlds Collide: Integrating Different Counterfactual Assumptions in Fairness

Neural Information Processing Systems

Machine learning is now being used to make crucial decisions about people's lives. For nearly all of these decisions there is a risk that individuals of a certain race, gender, sexual orientation, or any other subpopulation are unfairly discriminated against. Recent work has demonstrated how to use techniques from counterfactual inference to make predictions fair across different subpopulations. This method requires that one provide the causal model that generated the data at hand. In general, validating all causal implications of the model is not possible without further assumptions.




Multilinear Mixture of Experts: Scalable Expert Specialization through Factorization

Oldfield, James

Neural Information Processing Systems

An important corollary of successful task decomposition amongst experts is that layers are easier to debug and edit. Biased or unsafe behaviors can be better localized to specific experts' subcomputation, facilitating manual correction or surgery in a way that minimally affects the other functionality of the network. Addressing such behaviors is particularly crucial in the context of foundation models, which are often fine-tuned as black boxes after being pre-trained on unknown, potentially imbalanced data distributions.