Synthetic CVs To Build and Test Fairness-Aware Hiring Tools
Saldivar, Jorge, Gatzioura, Anna, Castillo, Carlos
Algorithmic hiring has become increasingly necessary in some sectors, as it promises to deal with hundreds or even thousands of applicants. At the heart of these systems are algorithms designed to retrieve and rank candidate profiles, which are usually represented by Curricula Vitae (CVs). Research has shown, however, that such technologies can inadvertently introduce bias, leading to discrimination based on factors such as candidates' age, gender, or national origin. Developing methods to measure, mitigate, and explain bias in algorithmic hiring, as well as to evaluate and compare fairness techniques before deployment, requires sets of CVs that reflect the characteristics of people from diverse backgrounds. However, datasets with these characteristics that could support this research do not exist. To address this limitation, this paper introduces an approach for building a synthetic dataset of CVs with features modeled on real materials collected through a data donation campaign. Additionally, the resulting dataset of 1,730 CVs is presented, which we envision as a potential benchmarking standard for research on algorithmic hiring discrimination.
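The paper's generation approach is not detailed in the abstract; as a minimal sketch of the general idea, the following samples synthetic CVs from attribute pools. All pool contents, field names, and ranges here are illustrative placeholders, not the paper's actual features (which are modeled on donated CVs).

```python
import random

# Hypothetical attribute pools -- illustrative only; the paper models
# its features on real CVs collected via a data donation campaign.
GENDERS = ["female", "male", "non-binary"]
AGE_RANGE = (22, 65)
DEGREES = ["BSc Computer Science", "BA Economics", "MSc Data Science"]
SKILLS = ["Python", "SQL", "project management", "communication", "Excel"]

def generate_cv(rng: random.Random) -> dict:
    """Sample one synthetic CV as a flat attribute dictionary."""
    return {
        "gender": rng.choice(GENDERS),
        "age": rng.randint(*AGE_RANGE),
        "degree": rng.choice(DEGREES),
        "skills": rng.sample(SKILLS, k=rng.randint(2, 4)),
        "years_experience": rng.randint(0, 30),
    }

def generate_dataset(n: int, seed: int = 0) -> list[dict]:
    """Generate n synthetic CVs, reproducibly for a fixed seed."""
    rng = random.Random(seed)
    return [generate_cv(rng) for _ in range(n)]
```

A fixed seed makes the dataset reproducible, which matters when the CVs are meant to serve as a shared benchmark.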
Humble AI in the real-world: the case of algorithmic hiring
Nair, Rahul, Vejsbjerg, Inge, Daly, Elizabeth, Varytimidis, Christos, Knowles, Bran
Humble AI (Knowles et al., 2023) argues for cautiousness in AI development and deployment through scepticism (accounting for limitations of statistical learning), curiosity (accounting for unexpected outcomes), and commitment (accounting for multifaceted values beyond performance). We present a real-world case study of humble AI in the domain of algorithmic hiring. Specifically, we evaluate virtual screening algorithms in a widely used hiring platform that matches candidates to job openings. Such contexts pose several challenges of misrecognition and stereotyping that are difficult to assess through standard fairness and trust frameworks; e.g., someone with a non-traditional background is less likely to rank highly. We demonstrate the technical feasibility of translating humble AI principles into practice through uncertainty quantification of ranks, entropy estimates, and a user experience that highlights algorithmic unknowns. We describe preliminary discussions with focus groups made up of recruiters. Future user studies seek to evaluate whether the higher cognitive load of a humble AI system fosters a climate of trust in its outcomes.
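The abstract does not specify how ranks are uncertainty-quantified; one simple stand-in, sketched below under that assumption, is to perturb candidate scores repeatedly and measure the Shannon entropy of each candidate's resulting rank distribution. The noise model and parameters are illustrative, not the paper's method.

```python
import math
import random
from collections import Counter

def rank_entropy(scores, noise=0.05, trials=500, seed=0):
    """For each candidate, estimate the Shannon entropy (in bits) of their
    rank when scores are perturbed with Gaussian noise. High entropy flags
    a rank the system should present as uncertain ("algorithmic unknowns")."""
    rng = random.Random(seed)
    n = len(scores)
    rank_counts = [Counter() for _ in range(n)]
    for _ in range(trials):
        noisy = [(s + rng.gauss(0, noise), i) for i, s in enumerate(scores)]
        # Sort descending: best noisy score gets rank 0.
        for rank, (_, i) in enumerate(sorted(noisy, reverse=True)):
            rank_counts[i][rank] += 1
    entropies = []
    for counts in rank_counts:
        probs = [c / trials for c in counts.values()]
        entropies.append(-sum(p * math.log2(p) for p in probs))
    return entropies
```

Well-separated scores yield near-zero entropy (the rank is stable), while near-ties yield entropy approaching log2 of the number of tied candidates, which a humble interface could surface instead of a falsely precise ordering.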
Are Emily and Greg Still More Employable than Lakisha and Jamal? Investigating Algorithmic Hiring Bias in the Era of ChatGPT
Veldanda, Akshaj Kumar, Grob, Fabian, Thakur, Shailja, Pearce, Hammond, Tan, Benjamin, Karri, Ramesh, Garg, Siddharth
One domain of interest for large language models (LLMs) is their use in algorithmic hiring, specifically in matching resumes with job categories. Yet this introduces issues of bias on protected attributes like gender, race, and maternity status. The seminal work of Bertrand & Mullainathan (2003) set the gold standard for identifying hiring bias via field experiments in which the response rates for identical resumes that differ only in protected attributes, e.g., racially suggestive names such as Emily or Lakisha, are compared. We replicate this experiment on state-of-the-art LLMs (GPT-3.5, Bard, Claude, and Llama) to evaluate bias (or lack thereof) on gender, race, maternity status, pregnancy status, and political affiliation. We evaluate LLMs on two tasks: (1) matching resumes to job categories; and (2) summarizing resumes with employment-relevant information. Overall, LLMs are robust across race and gender. They differ in their performance on pregnancy status and political affiliation. We use contrastive input decoding on open-source LLMs to uncover potential sources of bias.
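The paired-resume audit design can be sketched as follows. This is a hedged illustration, not the paper's pipeline: `match_score` is a hypothetical stand-in for querying an LLM (the study uses GPT-3.5, Bard, Claude, and Llama), and the name pairs and template are placeholders in the spirit of Bertrand & Mullainathan.

```python
import statistics

# Illustrative name pairs; a real audit would use a curated list.
NAME_PAIRS = [
    ("Emily Walsh", "Lakisha Washington"),
    ("Greg Baker", "Jamal Jones"),
]

RESUME_TEMPLATE = (
    "{name}\n5 years experience in software engineering.\nSkills: Python, SQL."
)

def match_score(resume_text: str, job_category: str) -> float:
    """Hypothetical matcher standing in for an LLM call under test.
    This toy version ignores the name entirely."""
    return 0.8 if "software" in resume_text and job_category == "IT" else 0.2

def audit_gap(job_category: str) -> float:
    """Mean score difference across resume pairs that are identical
    except for the racially suggestive name; 0.0 means no measured gap."""
    gaps = []
    for name_a, name_b in NAME_PAIRS:
        score_a = match_score(RESUME_TEMPLATE.format(name=name_a), job_category)
        score_b = match_score(RESUME_TEMPLATE.format(name=name_b), job_category)
        gaps.append(score_a - score_b)
    return statistics.mean(gaps)
```

Because everything but the name is held fixed, any nonzero gap is attributable to the name manipulation; significance testing over many pairs and templates would follow in a real study.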
Algorithmic Hiring Needs a Human Face
The way we apply for jobs has changed radically over the last 20 years, thanks to the arrival of sprawling online job-posting boards like LinkedIn, Indeed, and ZipRecruiter, and the use by hiring organizations of artificial intelligence (AI) algorithms to screen the tsunami of résumés that now gush forth from such sites into human resources (HR) departments. With video-based online job interviews now harnessing AI to analyze candidates' use of language and their performance in gamified aptitude tests, recruitment is becoming a decidedly algorithmic affair. Yet all is not well in HR's brave new world. After quizzing 8,000 job applicants and 2,250 hiring managers in the U.S., Germany, and Great Britain, researchers at Harvard Business School, working with the consultancy Accenture, discovered that many tens of millions of people are being barred from consideration for employment by résumé screening algorithms that throw out applicants who do not meet an unfeasibly large number of requirements, many of which are utterly irrelevant to the advertised job. For instance, says Joe Fuller, the Harvard professor of management practice who led the algorithmic hiring research, nurses and graphic designers who need merely to use computers have been barred from progressing to job interviews for not having experience, or degrees, in computer programming.