Researchers spot origins of stereotyping in AI language technologies


A team of researchers has identified a set of cultural stereotypes that are introduced into artificial intelligence language models early in their development--a finding that adds to our understanding of the factors that influence the results yielded by search engines and other AI-driven tools.

"Our work identifies stereotypes about people that widely used AI language models pick up as they learn English. The models we're looking at, and others like them for other languages, are the building blocks of most modern language technologies, from translation systems to question-answering personal assistants to industry tools for resume screening, highlighting the real danger posed by the use of these technologies in their current state," says Sam Bowman, an assistant professor at NYU's Department of Linguistics and Center for Data Science and the paper's senior author. "We expect this effort and related projects will encourage future research toward building fairer language processing systems."

The work dovetails with recent scholarship, such as Safiya Umoja Noble's "Algorithms of Oppression: How Search Engines Reinforce Racism" (NYU Press, 2018), which chronicles how racial and other biases have plagued widely used language technologies.
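Stereotype probes of this kind are often scored by comparing how a model rates minimally different sentence pairs that mention different social groups. The sketch below is a hypothetical illustration of that scoring idea, not the researchers' actual code: the `score` function stands in for a real language model's (pseudo-)log-likelihood, and the toy scorer, function names, and example pairs are all assumptions made for demonstration.

```python
# Hypothetical sketch of how stereotype probes for language models are
# commonly scored. A "score" here is any function mapping a sentence to
# a number, standing in for a real model's log-likelihood. Each pair
# differs only in the social group mentioned.

def stereotype_preference_rate(score, pairs):
    """Fraction of minimally different pairs where the model assigns a
    strictly higher score to the stereotyped sentence than to its
    counterpart. An unbiased model would land near 0.5."""
    preferred = sum(1 for stereo, anti in pairs if score(stereo) > score(anti))
    return preferred / len(pairs)

# Toy stand-in scorer (illustrative only, NOT a real language model):
# favors shorter sentences, so matched-length pairs tie.
def toy_score(sentence):
    return -len(sentence.split())

# Hypothetical example pairs in the minimal-difference format.
pairs = [
    ("He is a brilliant engineer.", "She is a brilliant engineer."),
    ("The nurse said she was tired.", "The nurse said he was tired."),
]

rate = stereotype_preference_rate(toy_score, pairs)
```

In a real study the stand-in scorer would be replaced by likelihood estimates from the model under test, and a rate far from 0.5 across many such pairs would indicate a systematic preference for the stereotyped phrasing.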
