Fresh concerns raised over sources of training material for AI systems

The Guardian 

Fresh fears have been raised about the training material used for some of the largest and most powerful artificial intelligence models, after several investigations exposed the fascist, pirated and malicious sources from which the data is harvested.

One such dataset is the Colossal Clean Crawled Corpus, or C4, assembled by Google from more than 15m websites and used to train both Google's LaMDA AI and Meta's GPT competitor, LLaMA. The dataset is public, but its scale has made it difficult to examine its contents: it is supposedly a "clean" version of a more expansive dataset, Common Crawl, with "noisy" content, offensive language and racist slurs removed from the material.

But an investigation by the Washington Post reveals that C4's "cleanliness" is only skin deep. While it draws on websites such as the Guardian (which makes up 0.05% of the entire dataset) and Wikipedia, as well as large databases such as Google Patents and the scientific journal hub PLOS, it also contains less reputable sites. The white nationalist site VDARE is in the database, among the 1,000 largest sites in the dataset, as is the far-right news site Breitbart.