Auditing the Use of Language Models to Guide Hiring Decisions

Gaebler, Johann D., Goel, Sharad, Huq, Aziz, Tambe, Prasanna

arXiv.org Artificial Intelligence 

AI-based systems have the potential to assist employers with many aspects of human resources (HR) management, from benefits administration to coaching and development to the most common HR use case, applicant screening. The global HR technology market based on predictive models was already growing rapidly prior to 2022, but attention to AI tools received a dramatic boost with the advent of large language models (LLMs), which are highly adept at understanding, summarizing, and evaluating text data. Given the primacy of text data in the job application process, an emerging HR use case for modern LLMs is to ingest entire application dossiers--including resumes, essays, and transcripts captured from interviews--and output seemingly cogent assessments of candidates' qualifications. As hiring use cases proliferate, however, employers and policymakers are racing to establish guidelines around whether the algorithmic evaluation of candidates comports with employment discrimination law, and how to audit commonly deployed AI tools to ensure they are not discriminatory. The ethical and legal implications of using predictive tools in HR have motivated a body of academic work (Raghavan et al., 2020; Tambe et al., 2019). Policymakers have matched the attention of firms and researchers, introducing a wave of legislation governing high-stakes algorithmic decision making, and hiring in particular (e.g., New York LL 144 or Illinois 820 ILCS 42).
