Goto

Collaborating Authors

 Putzel, Preston


Fair Generalized Linear Models with a Convex Penalty

arXiv.org Machine Learning

Despite recent advances in algorithmic fairness, methodologies for achieving fairness with generalized linear models (GLMs) have yet to be explored in general, despite GLMs being widely used in practice. In this paper we introduce two fairness criteria for GLMs based on equalizing expected outcomes or log-likelihoods. We prove that for GLMs both criteria can be achieved via a convex penalty term based solely on the linear components of the GLM, thus permitting efficient optimization.

To address these issues there has recently been a significant body of work in the machine learning community on algorithmic fairness in the context of predictive modeling, including (i) data preprocessing methods that try to reduce disparities, (ii) in-process approaches which enforce fairness during model training, and (iii) post-process approaches which adjust a model's predictions to achieve fairness after training is completed. However, the majority of this work has focused on classification problems with binary outcome variables, and to a lesser extent on regression.
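To make the in-process idea concrete, the sketch below fits a logistic regression (a GLM) by gradient descent with a convex fairness penalty on the linear components, here the squared gap between the two groups' average linear predictors X @ w. This is an illustrative stand-in under assumed names (`fair_glm_logistic`, a binary `group` indicator), not the paper's exact penalty.

```python
import numpy as np

def fair_glm_logistic(X, y, group, lam=1.0, lr=0.05, n_iter=3000):
    """Logistic regression with a convex fairness penalty:
    lam * (mean(X@w | group 0) - mean(X@w | group 1))**2.
    The penalty depends only on the linear components, so the
    whole objective stays convex in w."""
    n, d = X.shape
    w = np.zeros(d)
    g0, g1 = (group == 0), (group == 1)
    for _ in range(n_iter):
        z = X @ w
        p = 1.0 / (1.0 + np.exp(-z))
        grad_nll = X.T @ (p - y) / n                 # logistic NLL gradient
        gap = z[g0].mean() - z[g1].mean()            # gap in linear predictors
        grad_pen = 2.0 * gap * (X[g0].mean(axis=0) - X[g1].mean(axis=0))
        w -= lr * (grad_nll + lam * grad_pen)
    return w
```

Increasing `lam` trades log-likelihood for a smaller between-group gap in the linear predictor; with `lam=0` the fit reduces to ordinary maximum-likelihood logistic regression.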


Blackbox Post-Processing for Multiclass Fairness

arXiv.org Artificial Intelligence

Applying standard machine learning approaches for classification can produce unequal results across different demographic groups. When such models are then used in real-world settings, these inequities can have negative societal impacts. This has motivated the development of various approaches to fair classification with machine learning models in recent years. In this paper, we consider the problem of modifying the predictions of a blackbox machine learning classifier in order to achieve fairness in a multiclass setting. To accomplish this, we extend the 'post-processing' approach of Hardt et al. (2016), which focuses on fairness for binary classification, to the setting of fair multiclass classification. We explore when our approach produces both fair and accurate predictions through systematic synthetic experiments and also evaluate discrimination-fairness tradeoffs on several publicly available real-world application datasets. We find that overall, our approach produces minor drops in accuracy and enforces fairness when the number of individuals in the dataset is high relative to the number of classes and protected groups.
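The flavor of multiclass post-processing can be sketched with a small linear program: for each protected group, find a stochastic remapping matrix over the blackbox's predicted classes so that the group's output class rates hit a common target, while relabeling as few predictions as possible. This is a simplified demographic-parity-style illustration with assumed names (`fit_remap`); the paper's equalized-odds construction extending Hardt et al. (2016) is more involved.

```python
import numpy as np
from scipy.optimize import linprog

def fit_remap(rates_g, target):
    """Return a K x K stochastic matrix M for one protected group:
    row i is the distribution over output classes used when the
    blackbox predicts class i. M is chosen by LP so the group's
    output class rates equal `target` while minimizing the expected
    fraction of predictions that get relabeled."""
    K = len(target)
    # Cost of cell (i, j): probability mass moved off the diagonal.
    c = np.array([[rates_g[i] if i != j else 0.0 for j in range(K)]
                  for i in range(K)]).ravel()
    A_eq, b_eq = [], []
    for i in range(K):                      # each row of M is a distribution
        row = np.zeros(K * K)
        row[i * K:(i + 1) * K] = 1.0
        A_eq.append(row)
        b_eq.append(1.0)
    for j in range(K):                      # output class rates hit the target
        col = np.zeros(K * K)
        for i in range(K):
            col[i * K + j] = rates_g[i]
        A_eq.append(col)
        b_eq.append(target[j])
    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0.0, 1.0)] * (K * K))
    return res.x.reshape(K, K)
```

The LP is always feasible (setting every row of M to the target distribution satisfies both constraint sets), so randomized relabeling can enforce equal class rates for any group, at some cost in accuracy; this mirrors the abstract's observation that the approach works best when the dataset is large relative to the number of classes and groups.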