Transcending the "Male Code": Implicit Masculine Biases in NLP Contexts
Katie Seaborn, Shruti Chandra, Thibault Fabre
Critical scholarship has elevated the problem of gender bias in data sets used to train virtual assistants (VAs). Most work has focused on explicit biases in language, especially against women, girls, femme-identifying people, and genderqueer folk; implicit associations through word embeddings; and limited models of gender and masculinities, especially toxic masculinities, conflation of sex and gender, and a sex/gender binary framing of the masculine as diametric to the feminine. Yet, we must also interrogate how masculinities are "coded" into language and the assumption of "male" as the linguistic default: implicit masculine biases. To this end, we examined two natural language processing (NLP) data sets. We found that when gendered language was present, so were gender biases and especially masculine biases. Moreover, these biases related in nuanced ways to the NLP context. We offer a new dictionary called AVA that covers ambiguous associations between gendered language and the language of VAs.
arXiv.org Artificial Intelligence
Apr-21-2023
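
To make the dictionary-based analysis concrete, below is a minimal Python sketch of how a gendered-language lexicon like AVA might be applied to a corpus of VA-related text. This is an illustration, not the authors' method: the entries in `GENDERED_LEXICON`, its category labels, and the `scan_corpus` helper are all hypothetical stand-ins, since the actual format and contents of AVA are described in the paper, not here.

```python
import re
from collections import Counter

# Hypothetical stand-in for a gendered-language lexicon such as AVA;
# the real dictionary's entries and categories are not reproduced here.
GENDERED_LEXICON = {
    "he": "masculine",
    "him": "masculine",
    "guys": "masculine",
    "she": "feminine",
    "her": "feminine",
    "assistant": "ambiguous",  # VA language with ambiguous gender coding
}

def scan_corpus(texts):
    """Count lexicon hits per gender category across a list of texts."""
    counts = Counter()
    for text in texts:
        for token in re.findall(r"[a-z']+", text.lower()):
            category = GENDERED_LEXICON.get(token)
            if category:
                counts[category] += 1
    return counts

if __name__ == "__main__":
    sample = ["He asked the assistant to call her.",
              "Hey guys, remind him later."]
    # Counter({'masculine': 3, 'feminine': 1, 'ambiguous': 1})
    print(scan_corpus(sample))
```

Per-category hit counting is the simplest possible lexicon application; analyzing the "ambiguous associations" the paper targets would presumably require richer categories than this toy masculine/feminine/ambiguous set.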
- Country:
  - Africa > Eswatini
  - Asia
  - Europe
    - France (0.04)
    - United Kingdom > England
      - Cambridgeshire > Cambridge (0.14)
      - Essex (0.04)
      - Oxfordshire > Oxford (0.04)
  - North America
    - Canada > Ontario
      - Waterloo Region > Waterloo (0.04)
    - United States
      - California (0.04)
      - New York > New York County
        - New York City (0.04)
      - Pennsylvania (0.04)
  - Oceania
    - Australia (0.04)
    - New Zealand (0.04)
  - South America > Chile
- Genre:
  - Research Report (1.00)
- Industry:
  - Government > Regional Government (0.45)
  - Health & Medicine (0.69)
  - Information Technology (0.67)
  - Law (0.67)
  - Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.46)
  - Leisure & Entertainment (1.00)
  - Media > Film (0.45)