Assessing Social and Intersectional Biases in Contextualized Word Representations
Neural Information Processing Systems
Social bias in machine learning has drawn significant attention, with work ranging from demonstrations of bias in a multitude of applications and the curation of fairness definitions for different contexts to the development of algorithms that mitigate bias. In natural language processing, gender bias has been shown to exist in context-free word embeddings. Recently, contextual word representations have outperformed word embeddings on several downstream NLP tasks.