Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making
In an era characterized by the pervasive integration of artificial intelligence into decision-making processes across diverse industries, the demand for trust has never been more pronounced. This thesis presents a comprehensive exploration of bias and fairness, with particular emphasis on their ramifications in the banking sector, where AI-driven decisions carry substantial societal consequences. In this context, the integration of fairness, explainability, and human oversight is of utmost importance, culminating in what is commonly referred to as "Responsible AI". Addressing bias is therefore critical to building a corporate culture that complies with both AI regulations and universal human rights standards, particularly for automated decision-making systems. Embedding ethical principles into the development, training, and deployment of AI models is essential both for compliance with forthcoming European regulations and for promoting societal good. The thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias. Its contributions are validated through practical application in real-world scenarios, in collaboration with Intesa Sanpaolo. This collaborative effort not only deepens our understanding of fairness but also provides practical tools for the responsible implementation of AI-based decision-making systems. In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages, further promoting progress in the field of AI fairness.
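To make the notion of group fairness concrete, a common criterion in this literature is demographic parity: the rate of favourable decisions (e.g., loan approvals) should be similar across groups defined by a sensitive characteristic. The sketch below is a minimal illustration of this metric in plain Python/NumPy; it is not the API of the Bias On Demand or FairView packages, and the function name and toy data are assumptions for the example.

```python
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Absolute difference in favourable-outcome rates between two groups.

    y_pred    : binary model decisions (1 = favourable, e.g. loan approved)
    sensitive : binary sensitive characteristic (group 0 vs. group 1)
    """
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_0 = y_pred[sensitive == 0].mean()  # approval rate for group 0
    rate_1 = y_pred[sensitive == 1].mean()  # approval rate for group 1
    return abs(rate_0 - rate_1)

# Toy example (hypothetical data): 8 loan decisions, two groups of 4.
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups    = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(decisions, groups))  # 0.5 -> large disparity
```

A value near 0 indicates parity; larger values flag a disparity that bias-mitigation techniques then aim to reduce.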
Addressing Algorithmic Discrimination
It should no longer be a surprise that algorithms can discriminate. A criminal risk-assessment algorithm is far more likely to erroneously predict that a Black defendant will commit a crime in the future than a white defendant [2]. Ad-targeting algorithms promote job opportunities to race- and gender-skewed audiences, showing secretary and supermarket job ads to far more women than men [1]. A hospital's resource-allocation algorithm favored white over Black patients with the same level of medical need [5]. Algorithmic discrimination is particularly troubling when it affects consequential social decisions, such as who gets released from jail or has access to a loan or health care. Employment is a prime example. Employers are increasingly relying on algorithmic tools to recruit, screen, and select job applicants by making predictions about which candidates will be good employees.
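A widely used screening heuristic for such selection tools is the EEOC's "four-fifths rule": if one group's selection rate falls below 80% of the most-favoured group's rate, the outcome is treated as prima facie evidence of adverse impact. Below is a minimal illustrative sketch of that ratio in plain Python; the function name and toy data are assumptions for the example, not part of any cited system.

```python
def adverse_impact_ratio(selected, group):
    """Selection rate of the least-selected group divided by that of the
    most-selected group. Under the EEOC four-fifths rule, a ratio below
    0.8 is treated as prima facie evidence of adverse impact."""
    rates = {}
    for g in set(group):
        picks = [s for s, gg in zip(selected, group) if gg == g]
        rates[g] = sum(picks) / len(picks)  # per-group selection rate
    return min(rates.values()) / max(rates.values())

# Toy screening outcome (hypothetical data): 10 applicants.
hired  = [1, 1, 1, 0, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
ratio = adverse_impact_ratio(hired, groups)
print(f"impact ratio = {ratio:.2f}")  # 0.25 -> well below the 0.8 threshold
```

Here group "a" is hired at a rate of 0.8 and group "b" at 0.2, giving a ratio of 0.25; a hiring tool producing such outcomes would warrant scrutiny under this heuristic.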