Fairness Is Not Enough: Auditing Competence and Intersectional Bias in AI-powered Resume Screening
arXiv.org Artificial Intelligence
The increasing use of generative AI for resume screening is predicated on the assumption that it offers an unbiased alternative to biased human decision-making. However, this belief fails to address a critical question: are these AI systems fundamentally competent at the evaluative tasks they are meant to perform? This study investigates the question of competence through a two-part audit of eight major AI platforms. Experiment 1 confirmed complex, contextual racial and gender biases, with some models penalizing candidates merely for the presence of demographic signals. Experiment 2, which evaluated core competence, provided a critical insight: some models that appeared unbiased were, in fact, incapable of performing a substantive evaluation, relying instead on superficial keyword matching. This paper introduces the "Illusion of Neutrality" to describe this phenomenon, where an apparent lack of bias is merely a symptom of a model's inability to make meaningful judgments. This study recommends that organizations and regulators adopt a dual-validation framework, auditing AI hiring tools for both demographic bias and demonstrable competence to ensure they are both equitable and effective.
Jul-18-2025
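The dual-validation idea in the abstract can be illustrated with a minimal sketch. The code below is hypothetical, not from the paper: `score_resume` stands in for any AI screening model (here a toy keyword matcher), and the two audit functions show how a counterfactual demographic swap can report "no bias" while a competence check reveals the scorer cannot distinguish substance from keyword stuffing.

```python
# Hypothetical sketch of a dual-validation audit: check a resume scorer for
# (1) demographic-signal sensitivity and (2) substantive competence.
# `score_resume` is a stand-in toy model, not the paper's actual system.

def score_resume(text: str, keywords=("python", "sql", "leadership")) -> float:
    """Toy scorer: fraction of target keywords present (pure keyword matching)."""
    text = text.lower()
    return sum(k in text for k in keywords) / len(keywords)

def bias_gap(base_resume: str, signal_a: str, signal_b: str) -> float:
    """Counterfactual bias audit: same resume, two demographic signals."""
    return abs(score_resume(signal_a + "\n" + base_resume)
               - score_resume(signal_b + "\n" + base_resume))

def competence_gap(strong_resume: str, weak_resume: str) -> float:
    """Competence audit: a substantive evaluator should rank a genuinely
    strong resume above a keyword-stuffed but empty one."""
    return score_resume(strong_resume) - score_resume(weak_resume)

strong = "Led a team shipping a Python/SQL analytics platform; leadership role."
weak = "python sql leadership python sql leadership"  # keyword stuffing

# The keyword matcher shows a zero bias gap (it ignores names entirely)...
print(bias_gap(strong, "Name: Emily", "Name: Lakisha"))   # 0.0
# ...but also a zero competence gap: it cannot tell substance from stuffing.
print(competence_gap(strong, weak))                        # 0.0
```

This is the "Illusion of Neutrality" in miniature: the scorer passes a demographic-bias audit not because it judges fairly, but because it makes no meaningful judgment at all, which is exactly why the paper argues bias audits alone are insufficient.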