Who Gets the Mic? Investigating Gender Bias in the Speaker Assignment of a Speech-LLM
Dariia Puhach, Amir H. Payberah, Éva Székely
arXiv.org Artificial Intelligence
Whether the gender biases documented in text-based LLMs extend to Speech-LLMs remains an open question. This study proposes a methodology that leverages speaker assignment as an analytic tool for bias investigation. Unlike text-based models, which encode gendered associations only implicitly, a Speech-LLM must produce a voice with an audible gender, making speaker selection an explicit bias cue. We evaluate Bark, a Text-to-Speech (TTS) model, analyzing the default speakers it assigns to textual prompts. If Bark's speaker selection systematically aligns with gendered associations, it may reveal patterns in its training data or model design. To test this, we construct two datasets: (i) Professions, containing gender-stereotyped occupations, and (ii) Gender-Colored Words, containing words with gendered connotations. While Bark does not exhibit systematic bias, it demonstrates gender awareness and shows some gender inclinations.
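The evaluation described above reduces to a simple question: out of N stereotyped prompts, how often does the model's default voice match the stereotype, and is that rate distinguishable from chance? A minimal stdlib-only sketch of that tally and significance check is below; the profession labels, voice judgments, and counts are illustrative placeholders, not the paper's data, and the gender of each generated voice would in practice come from listener judgments or a classifier.

```python
import math

def binom_two_sided_p(k: int, n: int, p: float = 0.5) -> float:
    """Two-sided exact binomial test p-value, computed with math.comb."""
    pmf = lambda i: math.comb(n, i) * p**i * (1 - p) ** (n - i)
    p_k = pmf(k)
    # Sum probability of all outcomes at least as extreme as the observed one.
    return min(1.0, sum(pmf(i) for i in range(n + 1) if pmf(i) <= p_k + 1e-12))

# Hypothetical tallies: for each stereotyped profession prompt, did the
# model's default voice match the stereotype? (Illustrative entries only.)
assignments = {
    "nurse":     {"stereotype": "female", "voice": "female"},
    "engineer":  {"stereotype": "male",   "voice": "female"},
    "secretary": {"stereotype": "female", "voice": "female"},
    "mechanic":  {"stereotype": "male",   "voice": "male"},
}

matches = sum(a["voice"] == a["stereotype"] for a in assignments.values())
n = len(assignments)
p_value = binom_two_sided_p(matches, n)
print(f"{matches}/{n} stereotype-aligned, p = {p_value:.3f}")
```

With so few prompts the p-value stays large regardless of alignment, which is why a study of this kind needs a dataset of many stereotyped items per category before concluding that a selection pattern is systematic rather than noise.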
Aug-20-2025