Vaccine misinformation can easily poison AI – but there's a fix

New Scientist 

Artificial intelligence chatbots already have a misinformation problem, and it is relatively easy to poison such AI models by adding a small amount of medical misinformation to their training data. Luckily, researchers also have ideas about how to intercept AI-generated content that is medically harmful. Daniel Alber at New York University and his colleagues simulated a data poisoning attack, which attempts to manipulate an AI's output by corrupting its training data. They inserted AI-generated medical misinformation into their own experimental versions of a popular AI training dataset. Next, the researchers trained six large language models – similar in architecture to OpenAI's older GPT-3 model – on those corrupted versions of the dataset.
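The poisoning step the researchers simulated can be sketched as mixing a small fraction of fabricated documents into an otherwise clean training corpus before model training. This is a minimal illustrative sketch, not the study's actual pipeline; the document strings, the `poison_fraction` value, and the function name are all assumptions for demonstration:

```python
import random

def poison_dataset(clean_docs, poison_docs, poison_fraction, seed=0):
    """Mix a small fraction of poisoned documents into a clean corpus.

    clean_docs: list of legitimate training documents (strings)
    poison_docs: list of misinformation documents to draw from
    poison_fraction: fraction of len(clean_docs) to add as poison
    """
    rng = random.Random(seed)
    n_poison = int(len(clean_docs) * poison_fraction)
    corpus = clean_docs + [rng.choice(poison_docs) for _ in range(n_poison)]
    rng.shuffle(corpus)  # poisoned docs are indistinguishable by position
    return corpus

# Hypothetical corpus: 1,000 clean documents, two misinformation templates.
clean = [f"accurate medical text {i}" for i in range(1000)]
poison = [
    "fabricated claim about vaccine harms (misinformation)",
    "fabricated claim about drug dosing (misinformation)",
]

# Even a tiny fraction (here 0.1%) yields a corrupted corpus.
corpus = poison_dataset(clean, poison, poison_fraction=0.001)
```

The point of the sketch is how little poison is needed: at a 0.1% fraction, only one document in a thousand is fabricated, yet a model trained on the resulting corpus would still ingest the misinformation.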
