A Causal Framework to Measure and Mitigate Non-binary Treatment Discrimination
Majumdar, Ayan, Kanubala, Deborah D., Gupta, Kavya, Valera, Isabel
Fairness studies of algorithmic decision-making systems often simplify complex decision processes, such as bail or loan approvals, into binary classification tasks. However, these approaches overlook that such decisions are not inherently binary (e.g., approve or not approve bail or loan); they also involve non-binary treatment decisions (e.g., bail conditions or loan terms) that can influence the downstream outcomes (e.g., loan repayment or reoffending). In this paper, we argue that non-binary treatment decisions are integral to the decision process and controlled by decision-makers and, therefore, should be central to fairness analyses in algorithmic decision-making. We propose a causal framework that extends fairness analyses and explicitly distinguishes between decision-subjects' covariates and the treatment decisions. This specification allows decision-makers to use our framework to (i) measure treatment disparity and its downstream effects in historical data and, using counterfactual reasoning, (ii) mitigate the impact of past unfair treatment decisions when automating decision-making. We use our framework to empirically analyze four widely used loan approval datasets to reveal potential disparity in non-binary treatment decisions and their discriminatory impact on outcomes, highlighting the need to incorporate treatment decisions in fairness assessments. Moreover, by intervening in treatment decisions, we show that our framework effectively mitigates treatment discrimination from historical data to ensure fair risk score estimation and (non-binary) decision-making processes that benefit all stakeholders.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.14)
- Europe > Germany > Saarland (0.04)
- Europe > Switzerland > Basel-City > Basel (0.04)
- Law (1.00)
- Banking & Finance > Loans (1.00)
- Banking & Finance > Credit (0.93)
- Health & Medicine (0.93)
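The measurement step described in the abstract above can be made concrete in a few lines of code. The sketch below is purely illustrative and is not the paper's implementation: it assumes a hypothetical historical loan file (`loans.csv`) with made-up columns `gender`, `credit_amount`, `duration`, and `default`, and it simply (i) compares average non-binary treatment decisions across groups and (ii) checks how strongly those treatments predict the downstream outcome.

```python
# Illustrative sketch only (not the paper's method): quantify raw disparity in
# non-binary treatment decisions and their association with the outcome.
# All column and file names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("loans.csv")  # hypothetical historical loan data

# (i) Treatment disparity: how do treatment decisions differ by group?
treatment_gap = df.groupby("gender")[["credit_amount", "duration"]].mean()
print("Average treatment by group:\n", treatment_gap)

# (ii) Downstream effect: how strongly do treatments relate to the outcome?
X = df[["credit_amount", "duration"]]
y = df["default"]
outcome_model = LogisticRegression(max_iter=1000).fit(X, y)
print("Treatment coefficients on default risk:", outcome_model.coef_)
```

A gap in the group-wise treatment averages together with non-zero treatment coefficients would indicate, in the paper's terms, treatment disparity with a discriminatory downstream effect.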
Discriminatory AI explained with an example
AI is increasingly used to make decisions that affect us directly, such as screening job applications, setting our credit rating, and match-making on dating sites. It is therefore important that AI is non-discriminatory and that its decisions do not favor particular races, genders, or skin colors. Discriminatory AI is a very broad subject that goes beyond purely technical aspects. To make it easy to understand, however, I will demonstrate what discriminatory AI looks like using examples and visuals, which will give you a way to spot it. Let me first establish the context of the example.
Imagine a world without bias and how you can make the difference -- Chatspace AI
A few months back I came across a video that started off with a riddle: a boy, who is about to interview at a big company, is in the car with his father. The boy gets a call from the CEO of the company he is about to interview with, and when he answers, the CEO says, "Good luck son, you've got this." Participants were asked how this was possible. Do you have any guesses before reading further? Some guessed the CEO could be the grandfather, or that it was a pre-recorded call from the father, or that the boy has two fathers; some even guessed that the boy's name was 'Son'.
- Health & Medicine > Therapeutic Area (0.35)
- Education > Educational Setting > Online (0.31)
How AI Can End Bias
Harmful human bias--both intentional and unconscious--can be avoided with the help of artificial intelligence, but only if we teach it to play fair and constantly question the results. We humans make sense of the world by looking for patterns, filtering them through what we think we already know, and making decisions accordingly. When we talk about handing decisions off to artificial intelligence (AI), we expect it to do the same, only better. Machine learning does, in fact, have the potential to be a tremendous force for good. Humans are hindered by both their unconscious assumptions and their simple inability to process huge amounts of information. AI, on the other hand, can be taught to filter irrelevancies out of the decision-making process, pluck the most suitable candidates from a haystack of résumés, and guide us based on what it calculates is objectively best rather than simply what we've done in the past.
- Law (0.69)
- Banking & Finance (0.48)
Amazon Recruitment Software Discriminated Against Female Applicants - Ethical AI Advisory
Artificial intelligence is only as good as the data it receives. Hindsight is 20/20, but the past is worth little unless it becomes a building block from which we learn and grow. The team responsible for creating and implementing this particular AI system should have been more analytical about the information being uploaded into it. Even a quick scan of the resumes would have indicated that the vast majority of them were submitted by male candidates. Perhaps engaging a Diversity & Inclusion expert alongside the technical team would have prevented this from happening.
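As a rough illustration of the "quick scan" the author recommends, a few lines of pandas are enough to surface such a skew before any screening model is trained. This is only a sketch under assumed names: the file and the `gender` column are hypothetical, not Amazon's actual data.

```python
# Illustrative only: audit the demographic balance of a résumé training set
# before fitting any screening model. File and column names are hypothetical.
import pandas as pd

resumes = pd.read_csv("resume_training_data.csv")  # hypothetical file
shares = resumes["gender"].value_counts(normalize=True)
print(shares)  # a heavy skew here is exactly what the system would learn

if shares.max() > 0.8:  # arbitrary illustrative threshold
    print("Warning: training data is heavily imbalanced by gender.")
```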
Human Comprehension of Fairness in Machine Learning
Saha, Debjani, Schumann, Candice, McElfresh, Duncan C., Dickerson, John P., Mazurek, Michelle L., Tschantz, Michael Carl
Bias in machine learning has manifested injustice in several areas, such as medicine, hiring, and criminal justice. In response, computer scientists have developed myriad definitions of fairness to correct this bias in fielded algorithms. While some definitions are based on established legal and ethical norms, others are largely mathematical. It is unclear whether the general public agrees with these fairness definitions, and perhaps more importantly, whether they understand these definitions. We take initial steps toward bridging this gap between ML researchers and the public, by addressing the question: does a non-technical audience understand a basic definition of ML fairness? We develop a metric to measure comprehension of one such definition--demographic parity. We validate this metric using online surveys, and study the relationship between comprehension and sentiment, demographics, and the application at hand.
- North America > United States > Maryland > Prince George's County > College Park (0.14)
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > Alaska (0.04)
- Questionnaire & Opinion Survey (1.00)
- Personal (1.00)
- Research Report > New Finding (0.95)
- Research Report > Experimental Study (0.69)
- Information Technology (0.93)
- Education > Assessment & Standards (0.54)
- Education > Educational Setting > K-12 Education (0.46)
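For context on the definition studied above: demographic parity asks that the rate of positive predictions be the same across groups, i.e. P(Ŷ = 1 | S = a) = P(Ŷ = 1 | S = b). A minimal sketch of how one might compute the demographic-parity difference on model predictions is shown below; the function name and toy data are illustrative, not taken from the paper.

```python
# Minimal sketch: gap in positive-prediction rates between two groups.
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in P(y_pred = 1) between the two groups in `group`."""
    groups = np.unique(group)
    assert len(groups) == 2, "sketch assumes a binary sensitive attribute"
    rate_a = y_pred[group == groups[0]].mean()
    rate_b = y_pred[group == groups[1]].mean()
    return abs(rate_a - rate_b)

# Toy usage: positive rate is 0.75 for group "f" and 0.25 for group "m"
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group = np.array(["f", "f", "f", "f", "m", "m", "m", "m"])
print(demographic_parity_difference(y_pred, group))  # 0.5
```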
How Companies Can Use Employee Data Responsibly
In the wake of recent customer data breaches, companies are recognizing the need for more protections and transparency around the collection and use of customer data. But few have paid equal attention to the issues arising from the collection and mining of workplace data. Companies have vast amounts of valuable data on work and their workforce, and executives recognize the opportunity to use this data to improve productivity and to motivate and engage people. We surveyed more than 10,000 workers across all skill levels and generations, and 1,400 C-level executives, in 13 countries and 13 industries. We found that more than 90% of employees are willing to let their employers collect and use data on them and their work, but only if they benefit in some way.
- Oceania > Australia (0.05)
- North America > United States > Texas > Denton County > Denton (0.05)
- Banking & Finance (0.73)
- Telecommunications (0.48)
- Information Technology > Security & Privacy (0.35)