Explainable post-training bias mitigation with distribution-based fairness metrics
Franks, Ryan, Miroshnikov, Alexey
arXiv.org Artificial Intelligence
Machine learning (ML) techniques have become ubiquitous in the financial industry due to their powerful predictive performance. However, ML model outputs may exhibit certain types of unintended bias, measured as unfairness affecting protected sub-populations. Predictive models, and strategies that rely on such models, are subject to laws and regulations that ensure fairness. For instance, financial institutions (FIs) in the U.S. that are in the business of extending credit to applicants are subject to the Equal Credit Opportunity Act (ECOA) [14] and the Fair Housing Act (FHA) [13], which prohibit discrimination in credit offerings and housing transactions. The protected classes identified in these laws, including race, gender, age (subject to very limited exceptions), ethnicity, national origin, and marital status, cannot be used as attributes in lending decisions.
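A distribution-based fairness metric of the kind named in the title can be sketched with a toy example (hypothetical model scores, not the paper's data or its exact metric): the empirical Wasserstein-1 distance between the score distributions a model assigns to two protected sub-populations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical classifier scores in [0, 1] for two sub-populations.
# A larger distributional gap indicates stronger group-level bias.
scores_a = rng.beta(2.0, 5.0, size=1000)  # e.g., non-protected group
scores_b = rng.beta(2.5, 5.0, size=1000)  # e.g., protected group

# For equal-size samples, the empirical Wasserstein-1 distance is the
# mean absolute difference between the sorted (quantile-matched) scores.
w1_bias = np.mean(np.abs(np.sort(scores_a) - np.sort(scores_b)))

print(f"Wasserstein-1 bias estimate: {w1_bias:.4f}")
```

A post-training mitigation step would then adjust the model's scores to shrink this distance while limiting the loss in predictive performance.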
Apr-1-2025