Global Performance Disparities Between English-Language Accents in Automatic Speech Recognition

DiChristofano, Alex, Shuster, Henry, Chandra, Shefali, Patwari, Neal

arXiv.org Artificial Intelligence 

However, many users are familiar with the frustrating experience of repeatedly not being understood by their voice assistant [16], so much so that frustration with ASR has become a culturally shared source of comedy [4, 32]. Bias auditing of ASR services has quantified these experiences. English-language ASR has higher error rates: for Black Americans compared to white Americans [24, 45]; for stigmatized British accents compared to favored British accents [28]; for Scottish speakers compared to speakers from California and New Zealand [44]; for speakers whose first language is a tone language compared to those whose first language is not [2]; for speakers with Indian accents compared to speakers with "American" accents [31]; and for speakers whose first language is not English compared to those for whom it is [28]. It should go without saying, but everyone has an accent: there is no "unaccented" version of English [26]. Due to colonization and globalization, different Englishes are spoken around the world. While some English accents may be favored by those with class, race, and national origin privilege [28], there is no technical barrier to building an ASR system that works well on any particular accent. We are thus left with the question: why does ASR performance vary with the global English accent spoken?
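The error rates cited in these audits are typically word error rates (WER): the number of word substitutions, deletions, and insertions needed to turn the ASR transcript into the reference transcript, divided by the reference length. As a minimal illustration (not the implementation used by any of the cited studies), WER can be computed with a standard edit-distance dynamic program:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimum edits to turn hyp[:j] into ref[:i]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # all deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # all insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)


# One substituted word out of three gives WER = 1/3.
print(wer("the cat sat", "the hat sat"))
```

A disparity audit then compares WER distributions across speaker groups, e.g. grouped by accent or first language.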
