Many of us like to think that artificial intelligence could help eradicate biases, that algorithms could help humans avoid hiring or policing according to gender- or race-related stereotypes. But a new study suggests that when computers acquire knowledge from text written by humans, they also replicate the same racial and gender prejudices, thus perpetuating them.
Science is in the midst of a data crisis. Last year, there were more than 1.2 million new papers published in the biomedical sciences alone, bringing the total number of peer-reviewed biomedical papers to over 26 million. However, the average scientist reads only about 250 papers a year. Meanwhile, the quality of the scientific literature has been in decline. Some recent studies found that the majority of biomedical papers were irreproducible.
Microsoft's failed experiment with its chatbot Tay, which turned into an inveterate racist within 24 hours of interacting with people on Twitter, showed that the AI systems being built today can fall victim to human prejudices and, in particular, to stereotyped thinking. A small group of researchers from Princeton University set out to discover why this happens.