Fair Generalized Linear Models with a Convex Penalty
Hyungrok Do, Preston Putzel, Axel Martin, Padhraic Smyth, Judy Zhong
Despite recent advances in algorithmic fairness, methodologies for achieving fairness with generalized linear models (GLMs) have yet to be explored in general, despite GLMs being widely used in practice. In this paper we introduce two fairness criteria for GLMs based on equalizing expected outcomes or log-likelihoods. We prove that for GLMs both criteria can be achieved via a convex penalty term based solely on the linear components of the GLM, thus permitting efficient optimization.

To address these issues there has recently been a significant body of work in the machine learning community on algorithmic fairness in the context of predictive modeling, including (i) data preprocessing methods that try to reduce disparities, (ii) in-process approaches which enforce fairness during model training, and (iii) post-process approaches which adjust a model's predictions to achieve fairness after training is completed. However, the majority of this work has focused on classification problems with binary outcome variables, and to a lesser extent on regression.
Jun-17-2022
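The abstract's key claim is that both fairness criteria reduce to a convex penalty built solely from the GLM's linear component. The following is a minimal sketch of that idea for the logistic-regression case, using a squared gap between group means of the linear predictor as the penalty; the penalty form, the weight `lam`, and all variable names are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch (not the paper's exact penalty): a logistic GLM
# fit with a convex fairness penalty on the linear component X @ beta.
import numpy as np
from scipy.optimize import minimize

def fair_logistic_loss(beta, X, y, group, lam):
    """Logistic negative log-likelihood plus a convex fairness penalty.

    group is a binary protected-attribute indicator; lam trades off
    accuracy against fairness (both names are illustrative).
    """
    eta = X @ beta                                   # linear component of the GLM
    nll = np.mean(np.log1p(np.exp(eta)) - y * eta)   # logistic NLL
    # Squared gap between group means of the linear predictor:
    # affine in beta, so its square is convex and the objective stays convex.
    gap = eta[group == 1].mean() - eta[group == 0].mean()
    return nll + lam * gap ** 2

# Toy data: two groups with shifted feature distributions.
rng = np.random.default_rng(0)
n, d = 500, 3
group = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, d)) + 0.5 * group[:, None]
true_beta = np.array([1.0, -0.5, 0.25])
y = (rng.random(n) < 1 / (1 + np.exp(-(X @ true_beta)))).astype(float)

res = minimize(fair_logistic_loss, x0=np.zeros(d), args=(X, y, group, 5.0))
print("fitted beta:", res.x)
```

Because the penalty is a squared affine function of beta, the full objective remains convex, which is why an off-the-shelf smooth solver suffices in this sketch; the paper's actual criteria (equalized expected outcomes or log-likelihoods) may lead to a different, though still convex, penalty.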