Explainable post-training bias mitigation with distribution-based fairness metrics

Franks, Ryan, Miroshnikov, Alexey

arXiv.org Artificial Intelligence 

Machine learning (ML) techniques have become ubiquitous in the financial industry due to their powerful predictive performance. However, ML model outputs may exhibit certain types of unintended bias, that is, measurable unfairness that adversely impacts protected sub-populations. Predictive models, and strategies that rely on such models, are subject to laws and regulations that ensure fairness. For instance, financial institutions (FIs) in the U.S. that are in the business of extending credit to applicants are subject to the Equal Credit Opportunity Act (ECOA) [14] and the Fair Housing Act (FHA) [13], which prohibit discrimination in credit offerings and housing transactions. The protected classes identified in these laws, including race, gender, age (subject to very limited exceptions), ethnicity, national origin, and marital status, cannot be used as attributes in lending decisions.