When Worlds Collide: Integrating Different Counterfactual Assumptions in Fairness

Neural Information Processing Systems

Machine learning is now being used to make crucial decisions about people's lives. For nearly all of these decisions there is a risk that individuals of a certain race, gender, sexual orientation, or any other subpopulation are unfairly discriminated against. Our recent method has demonstrated how to use techniques from counterfactual inference to make predictions fair across different subpopulations. This method requires that one provide the causal model that generated the data at hand. In general, validating all causal implications of the model is not possible without further assumptions.




Appendix

Neural Information Processing Systems

CelebA is a well-known large-scale face dataset. Following previous works [41, 58], we employ this dataset to predict human hair color as "blond" or "not blond".


f593c9c251d4d7cf14d4ab9861dfb7eb-Paper-Conference.pdf

Neural Information Processing Systems

However, some recent studies have recognized that most of these approaches fail to improve performance over empirical risk minimization, especially when applied to overparameterized neural networks.