Q. If machine learning is so smart, how come AI models are such racist, sexist homophobes? A. Humans really suck

#artificialintelligence 

For this research, computer scientists at the University of Southern California (USC) and the University of California, Los Angeles, probed two state-of-the-art natural language systems: OpenAI's small GPT-2 model, which sports 124 million parameters, and Google's recurrent neural network [PDF] – referred to as LM_1B in the Cali academics' paper [PDF] – that was trained using the 1 Billion Word Language Benchmark. Machine-learning code, it seems, picks up all of its prejudices from its human creators: the software ends up with sexist, racist, and homophobic tendencies by learning from books, articles, and webpages subtly, or not so subtly, laced with our social and cultural biases. Multiple experiments have demonstrated that trained language models assume doctors are male, and are more likely to associate positive terms with Western names popular in Europe and America than African-American names, for instance. "Despite the fact that biases in language models are well-known, there is a lack of systematic evaluation metrics for quantifying and analyzing such biases in language generation," Emily Sheng, first author of the study and a PhD student at the USC, told The Register. And so, to evaluate the output of GPT-2 and LM_1B in a systematic way, the researchers trained two separate text classifiers, one to measure bias, and the other to measure sentiment.

Duplicate Docs Excel Report

Title
None found

Similar Docs  Excel Report  more

TitleSimilaritySource
None found