"Over-the-Hood" AI Inclusivity Bugs and How 3 AI Product Teams Found and Fixed Them
Anderson, Andrew, Moussaoui, Fatima A., Guevara, Jimena Noa, Hamid, Md Montaser, Burnett, Margaret
arXiv.org Artificial Intelligence
While much research has shown the presence of AI's "under-the-hood" biases (e.g., algorithmic or training-data bias), what about "over-the-hood" inclusivity biases: barriers in user-facing AI products that disproportionately exclude users with certain problem-solving approaches? Recent research has begun to report the existence of such biases, but what do they look like, how prevalent are they, and how can developers find and fix them? To find out, we conducted a field study with 3 AI product teams to investigate what kinds of AI inclusivity bugs exist uniquely in user-facing AI products, and whether/how AI product teams might harness an existing (non-AI-oriented) inclusive design method to find and fix them. The teams' work resulted in identifying 6 types of AI inclusivity bugs arising 83 times, fixes covering 47 of these bug instances, and a new variation of the GenderMag inclusive design method, GenderMag-for-AI, that is especially effective at detecting certain kinds of AI inclusivity bugs.
Oct-23-2025