Gender and Positional Biases in LLM-Based Hiring Decisions: Evidence from Comparative CV/Résumé Evaluations
This study examines the behavior of Large Language Models (LLMs) when evaluating professional candidates based on their resumes or curricula vitae (CVs). In an experiment involving 22 leading LLMs, each model was systematically given one job description along with a pair of profession-matched CVs, one bearing a male first name, the other a female first name, and asked to select the more suitable candidate for the job. Each CV pair was presented twice, with names swapped, to ensure that any observed preference in candidate selection stemmed from gendered name cues rather than from CV content. Despite identical professional qualifications across genders, all LLMs consistently favored female-named candidates across 70 different professions. Adding an explicit gender field (male/female) to the CVs further increased the preference for female applicants. When gendered names were replaced with the gender-neutral identifiers "Candidate A" and "Candidate B", several models displayed a preference for "Candidate A". Counterbalancing gender assignment between these gender-neutral identifiers restored gender parity in candidate selection. When asked to rate CVs in isolation rather than compare pairs, LLMs assigned slightly higher average scores to female CVs overall, but the effect size was negligible. Including preferred pronouns (he/him or she/her) next to a candidate's name slightly increased the odds of that candidate being selected, regardless of gender. Finally, most models exhibited a substantial positional bias, tending to select the candidate listed first in the prompt. These findings underscore the need for caution when deploying LLMs in high-stakes autonomous decision-making contexts and raise doubts about whether LLMs consistently apply principled reasoning.
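A minimal sketch of the name-swapping protocol described above, assuming hypothetical CV templates containing a `{name}` placeholder and a `query_llm` stand-in for a real model call; the prompt wording and names are invented, not the study's.

```python
def make_cv(template: str, name: str) -> str:
    """Instantiate a CV template with a first name; all other content is identical."""
    return template.format(name=name)

def evaluate_pair(query_llm, job: str, cv_a: str, cv_b: str,
                  male: str = "James", female: str = "Emily") -> dict:
    """Present each CV pair twice, swapping which CV carries the male vs.
    female name, and tally which gendered name the model selects."""
    picks = {"male": 0, "female": 0}
    for names in [(male, female), (female, male)]:   # name swap across the two runs
        first = make_cv(cv_a, names[0])
        second = make_cv(cv_b, names[1])
        prompt = (f"Job description:\n{job}\n\nCandidate 1:\n{first}\n\n"
                  f"Candidate 2:\n{second}\n\n"
                  "Which candidate is more suitable? Answer 1 or 2.")
        picked_name = names[0] if query_llm(prompt).strip() == "1" else names[1]
        picks["male" if picked_name == male else "female"] += 1
    return picks  # e.g. {"male": 0, "female": 2} => female name picked in both runs
```

Aggregating `picks` over many profession-matched pairs, and additionally varying which candidate is listed first, lets a name-driven preference be separated from the positional bias the study also reports.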
- Asia > Middle East > Jordan (0.04)
- Asia > India (0.04)
- Oceania > New Zealand (0.04)
- (10 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
Algorithmic Hiring and Diversity: Reducing Human-Algorithm Similarity for Better Outcomes
Parasurama, Prasanna, Ipeirotis, Panos
Algorithmic tools are increasingly used in hiring to improve fairness and diversity, often by enforcing constraints such as gender-balanced candidate shortlists. However, we show theoretically and empirically that enforcing equal representation at the shortlist stage does not necessarily translate into more diverse final hires, even when there is no gender bias in the hiring stage. We identify a crucial factor influencing this outcome: the correlation between the algorithm's screening criteria and the human hiring manager's evaluation criteria -- higher correlation leads to lower diversity in final hires. Using a large-scale empirical analysis of nearly 800,000 job applications across multiple technology firms, we find that enforcing equal shortlists yields limited improvements in hire diversity when the algorithmic screening closely mirrors the hiring manager's preferences. We propose a complementary algorithmic approach designed explicitly to diversify shortlists by selecting candidates likely to be overlooked by managers, yet still competitive according to their evaluation criteria. Empirical simulations show that this approach significantly enhances gender diversity in final hires without substantially compromising hire quality. These findings highlight the importance of algorithmic design choices in achieving organizational diversity goals and provide actionable guidance for practitioners implementing fairness-oriented hiring algorithms.
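The following is a minimal sketch, under assumed score arrays, of the shortlist-diversification idea the abstract describes: pick candidates who clear the algorithm's competence bar but whom the (predicted) manager criteria would rank low. It illustrates the general principle, not the authors' actual estimator.

```python
import numpy as np

def diversified_shortlist(algo_score: np.ndarray,
                          manager_score_pred: np.ndarray,
                          k: int, quality_floor: float) -> np.ndarray:
    """Return indices of k candidates who are competitive on the screening
    algorithm's criteria (algo_score >= quality_floor) yet likely to be
    overlooked by the manager (lowest predicted manager evaluations).
    Selecting these lowers the algorithm-manager correlation that the
    paper identifies as the driver of low hire diversity."""
    eligible = np.flatnonzero(algo_score >= quality_floor)
    overlooked_first = eligible[np.argsort(manager_score_pred[eligible])]
    return overlooked_first[:k]
```

Here `quality_floor` is the knob trading hire quality against diversity; per the abstract, the authors' simulations suggest the quality cost of such a scheme can be kept small.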
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > New York > Monroe County > Rochester (0.04)
- Europe > Norway (0.04)
- Research Report > New Finding (0.46)
- Research Report > Experimental Study (0.46)
- Law > Labor & Employment Law (0.68)
- Information Technology > Services (0.46)
The legal pitfalls of using AI at work
Businesses are increasingly using artificial intelligence (AI) to speed up decision-making and other HR processes, such as recruitment, work allocation, management decisions and dismissals.
- Profiling: using algorithms to categorise data and find correlations between data sets. This can be used to make predictions about individuals; for example, by collecting data on employees to predict and/or conclude that they are not meeting targets, potentially leading to capability proceedings or dismissals.
- Automated decision-making (ADM): where AI is used to make a decision without human intervention; for example, where a job candidate is required to undertake a personality questionnaire as part of a recruitment process and is automatically rejected on the basis of their scoring (a rule of this kind is sketched below).
- Machine learning: where machines are taught, using algorithms, to imitate intelligent human behaviour.
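As a toy illustration of the ADM scenario above (the field name and pass mark are invented), a fully automated rejection rule can be this simple, which is exactly why it attracts legal scrutiny:

```python
def screen_applicant(questionnaire_score: int, pass_mark: int = 60) -> str:
    """Automated decision-making with no human in the loop: the candidate
    is accepted or rejected purely on their questionnaire score."""
    return "proceed" if questionnaire_score >= pass_mark else "auto-reject"
```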
- Law (0.85)
- Information Technology > Security & Privacy (0.33)
This is how AI bias really happens--and why it's so hard to fix – MIT Technology Review
The first thing computer scientists do when they create a deep-learning model is decide what they actually want it to achieve. A credit card company, for example, might want to predict a customer's creditworthiness, but "creditworthiness" is a rather nebulous concept. In order to translate it into something that can be computed, the company must decide whether it wants to, say, maximize its profit margins or maximize the number of loans that get repaid. It could then define creditworthiness within the context of that goal. The problem is that "those decisions are made for various business reasons other than fairness or discrimination," explains Solon Barocas, an assistant professor at Cornell University who specializes in fairness in machine learning. If the algorithm discovered that giving out subprime loans was an effective way to maximize profit, it would end up engaging in predatory behavior even if that wasn't the company's intention.
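A small synthetic sketch of Barocas's point: the same features and the same learner, trained against two different operationalizations of "creditworthiness", can produce materially different lending policies (all data below is invented).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))            # synthetic applicant features

# Two plausible but different "creditworthiness" labels:
y_profit = (X @ [0.9, 0.1, 0.0, 0.0, 0.0] + rng.normal(size=1000)) > 0  # account was profitable
y_repaid = (X @ [0.0, 0.1, 0.9, 0.0, 0.0] + rng.normal(size=1000)) > 0  # loan was fully repaid

# Identical pipeline, different objective => different approval decisions.
approve_profit = LogisticRegression().fit(X, y_profit).predict(X)
approve_repaid = LogisticRegression().fit(X, y_repaid).predict(X)
print("policy agreement:", (approve_profit == approve_repaid).mean())
```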
- North America > United States > Utah (0.05)
- North America > United States > Kentucky (0.05)
Discriminatory AI explained with an example
AI is increasingly used to make decisions that impact us directly, such as job applications, our credit ratings, and match-making on dating sites. So it is important that AI is non-discriminatory and that its decisions do not favor certain races, genders, or skin colors. Discriminatory AI is a very wide subject that goes beyond purely technical aspects. However, to make it easily understandable, I will demonstrate what discriminatory AI looks like using examples and visuals. This will give you a way to spot a discriminatory AI. Let me first establish the context of the example.
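One concrete way to spot the kind of discriminatory behaviour the post goes on to illustrate is to compare selection rates across groups; a common heuristic (the "four-fifths rule") flags ratios below 0.8. The data below is invented purely for illustration.

```python
import numpy as np

def disparate_impact(selected: np.ndarray, group: np.ndarray):
    """Per-group selection rates and their ratio (lowest / highest)."""
    rates = {g: selected[group == g].mean() for g in np.unique(group)}
    return rates, min(rates.values()) / max(rates.values())

selected = np.array([1, 1, 0, 1, 0, 0])            # model's yes/no decisions
group = np.array(["A", "A", "A", "B", "B", "B"])   # applicant group labels
print(disparate_impact(selected, group))           # ratio 0.5 < 0.8: flagged
```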
This is how AI bias really happens--and why it's so hard to fix
Over the past few months, we've documented how the vast majority of AI's applications today are based on the category of algorithms known as deep learning, and how deep-learning algorithms find patterns in data. We've also covered how these technologies affect people's lives: how they can perpetuate injustice in hiring, retail, and security and may already be doing so in the criminal legal system. But it's not enough just to know that this bias exists. If we want to be able to fix it, we need to understand the mechanics of how it arises in the first place. We often shorthand our explanation of AI bias by blaming it on biased training data. The reality is more nuanced: bias can creep in long before the data is collected as well as at many other stages of the deep-learning process.
- North America > United States > Utah (0.05)
- North America > United States > Kentucky (0.05)
Will Robots Free Recruiting From Bias?
All humans have bias, unconscious or otherwise, and its risks and effects have never been more apparent than in recruitment. As such, it seems reasonable to hope that technology holds the key to achieving fairer outcomes in hiring decisions. For those committed to making the process more inclusive and organisations more diverse, the potential and ever-increasing possibilities of technology as part of recruitment can appear limitless. As the use of AI in recruitment continues to hit the headlines, automation has become more widespread, transforming whom companies recruit and how. Although the appeal of AI is clear (and its use can be transformative in recruitment), it is important to reappraise exactly what technology can change for the better right now. By taking a more realistic look at the technology, we can recalibrate our expectations and understand the role we have to play in driving meaningful change.
How AI Can End Bias
We humans make sense of the world by looking for patterns, filtering them through what we think we already know, and making decisions accordingly. When we talk about handing decisions off to AI, we expect it to do the same, only better. Machine learning does, in fact, have the potential to be a tremendous force for good. Humans are hindered by both their unconscious assumptions and their simple inability to process huge amounts of information. Artificial intelligence (AI), on the other hand, can be taught to filter irrelevancies out of the decision-making process, pluck the most suitable candidates from a haystack of résumés, and guide us based on what it calculates is objectively best rather than simply what we've done in the past.
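A minimal sketch of the "filter irrelevancies" idea (column names are hypothetical): drop explicitly protected attributes before any résumé-scoring model sees the data, while keeping in mind that correlated proxies can still leak the same information.

```python
import pandas as pd

PROTECTED = {"name", "gender", "age", "photo_url"}

def strip_protected(resumes: pd.DataFrame) -> pd.DataFrame:
    """Remove explicitly protected columns before model training/scoring.
    Caution: proxies (e.g., graduation year ~ age) may remain in the data."""
    return resumes.drop(columns=list(PROTECTED & set(resumes.columns)))
```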
- Law (0.69)
- Banking & Finance (0.48)
Why Bias in Artificial Intelligence is Bad News for Society
The practice of including artificial intelligence in industry applications has been skyrocketing for a decade. This is evident in how AI and its constituent applications, machine learning, computer vision, facial analysis, autonomous vehicles, and deep learning, form the pillars of modern digital empowerment. The ability to learn from the data it is trained on and to make decisions derived from its insights makes AI unlike any earlier technology. Leaders believe that possessing AI-based technologies equates to future industry success. From healthcare, research, finance, and logistics to the military and law enforcement, AI holds the key to a massive competitive edge and upgraded capabilities, with monetary benefits too.
- Law (0.36)
- Government (0.36)
Female 2020 Democratic Presidential Candidates Face a 'Gender Penalty' Online, Study Finds
A new analysis of Twitter and news coverage surrounding the Democratic primary candidates for the 2020 U.S. presidential elections shows that female candidates are attacked significantly more often than male candidates by trolls and fake news accounts. The report, published Nov. 5 by Lucina Di Meco, Global Fellow at The Wilson Center, used artificial intelligence, in partnership with the non-partisan data analytics firm Marvelous AI, to track the coverage of six Democratic candidates on Twitter, measuring the volume of conversation around each candidate between December 2018 and April 2019. Joe Biden, Bernie Sanders, Pete Buttigieg, Elizabeth Warren, Kamala Harris and Amy Klobuchar were the candidates included in the study, which forms part of the broader report titled #ShePersisted: Women, Politics and Power in the New Media World. These online conversations were analyzed for one week after each candidate's official campaign launch, which fell between December 2018 and April 2019 depending on the candidate. Marvelous AI also examined the political bias and credibility of Twitter users participating in the conversation, as well as the themes and narratives surrounding each candidate.