The Download: stereotypes in AI models, and the new age of coding

MIT Technology Review 

AI models are riddled with culturally specific biases. A new data set, called SHADES, is designed to help developers combat the problem by spotting harmful stereotypes and other kinds of discrimination that emerge in AI chatbot responses across a wide range of languages. Why it matters: Although tools that spot stereotypes in AI models already exist, the vast majority of them work only on models trained in English. They identify stereotypes in models trained in other languages by relying on machine translations from English, which can fail to recognize stereotypes found only within certain non-English languages. To get around these problematic generalizations, SHADES was built using 16 languages from 37 geopolitical regions.