Gender Bias in Emotion Recognition by Large Language Models
Maureen Herbert, Katie Sun, Angelica Lim, Yasaman Etesam
arXiv.org Artificial Intelligence
The rapid advancement of large language models (LLMs) and their growing integration into daily life underscore the importance of evaluating and ensuring their fairness. In this work, we examine fairness within the domain of emotional theory of mind, investigating whether LLMs exhibit gender bias when presented with a description of a person and their environment and asked, "How does this person feel?" Furthermore, we propose and evaluate several debiasing strategies, demonstrating that meaningful reductions in bias require training-based interventions rather than inference-time, prompt-based approaches such as prompt engineering alone.
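The probe the abstract describes can be sketched as a counterfactual test: gender-swap the person description and check whether the model's emotion answer changes. The following is a minimal illustrative sketch, not the authors' implementation; `query` stands in for an actual LLM call answering "How does this person feel?", and the swap table is a deliberately tiny example (a real study would need a validated lexicon and handling of ambiguous forms like "her"/"his").

```python
import re

# Illustrative swap table (both directions); purely an example lexicon.
SWAPS = {"he": "she", "she": "he", "man": "woman", "woman": "man",
         "boy": "girl", "girl": "boy"}

_PATTERN = re.compile(r"\b(" + "|".join(map(re.escape, SWAPS)) + r")\b",
                      re.IGNORECASE)

def gender_swap(text: str) -> str:
    """Swap gendered words (whole words only, preserving capitalization)."""
    def repl(m: re.Match) -> str:
        word = m.group(0)
        out = SWAPS[word.lower()]
        return out.capitalize() if word[0].isupper() else out
    return _PATTERN.sub(repl, text)

def bias_probe(description: str, query):
    """Ask `query` (a stand-in for the LLM) about the original and the
    gender-swapped description; a label mismatch flags a candidate bias case."""
    original = query(description)
    swapped = query(gender_swap(description))
    return original, swapped, original != swapped
```

Running `bias_probe` over many scene descriptions and aggregating the mismatch rate gives one simple bias metric; the paper's finding is that prompt engineering alone does not drive such gaps down, whereas training-based interventions do.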
Nov-26-2025
- Country:
  - Asia > Middle East > Israel (0.04)
  - Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
  - North America > Canada
- Genre:
  - Research Report > New Finding (1.00)