Vaccine misinformation can easily poison AI – but there's a fix
Artificial intelligence chatbots already have a misinformation problem – and it is relatively easy to poison such AI models by adding a small amount of medical misinformation to their training data. Luckily, researchers also have ideas about how to intercept AI-generated content that is medically harmful.

Daniel Alber at New York University and his colleagues simulated a data poisoning attack, which attempts to manipulate an AI's output by corrupting its training data. First, they used an AI service to generate medical misinformation and inserted it into their own experimental versions of a popular AI training dataset. Next, the researchers trained six large language models – similar in architecture to OpenAI's older GPT-3 model – on those corrupted versions of the dataset.
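The poisoning step described above – mixing a small proportion of misinformation documents into an otherwise clean training corpus – can be sketched roughly as follows. This is a minimal illustration, not the researchers' actual code; the function name `poison_corpus`, the toy documents, and the poisoning fraction are all assumptions for the example.

```python
import random

def poison_corpus(documents, poison_docs, fraction, seed=0):
    """Return a copy of `documents` with a small fraction of
    poisoned documents inserted at random positions.

    `fraction` is the ratio of injected documents to the size of
    the clean corpus (e.g. 0.001 = 0.1% poisoned content).
    """
    rng = random.Random(seed)
    n_poison = max(1, int(len(documents) * fraction))
    corpus = list(documents)
    for doc in rng.choices(poison_docs, k=n_poison):
        # Insert each poisoned document at a random index so the
        # corrupted content is scattered through the corpus.
        corpus.insert(rng.randrange(len(corpus) + 1), doc)
    return corpus

# Toy example: 1,000 clean documents, 0.1% poisoning rate.
clean = [f"clean medical article {i}" for i in range(1000)]
bad = ["fabricated claim: treatment X cures disease Y"]
poisoned = poison_corpus(clean, bad, fraction=0.001)
print(len(poisoned))  # 1001
```

Even at such tiny fractions, a model trained on the poisoned corpus can absorb and repeat the injected claims, which is what makes this style of attack cheap to mount and hard to notice.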