Vaccine misinformation can easily poison AI – but there's a fix
Artificial intelligence chatbots already have a misinformation problem, and it is relatively easy to poison such AI models by adding a small amount of medical misinformation to their training data. Luckily, researchers also have ideas about how to intercept AI-generated content that is medically harmful.

Daniel Alber at New York University and his colleagues simulated a data poisoning attack, which attempts to manipulate an AI's output by corrupting its training data. They inserted AI-generated medical misinformation into their own experimental versions of a popular AI training dataset. Next, the researchers trained six large language models, similar in architecture to OpenAI's older GPT-3 model, on those corrupted versions of the dataset.
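In outline, the poisoning step the researchers simulated amounts to replacing a small fraction of an otherwise trustworthy training corpus with fabricated documents before a model ever sees it. The sketch below is illustrative only: the function name, the placeholder documents, and the poisoning fraction are assumptions for demonstration, not the study's actual data or parameters.

```python
import random

def poison_dataset(clean_docs, poison_docs, fraction, seed=0):
    """Return a copy of clean_docs in which `fraction` of the documents
    have been swapped for entries drawn from poison_docs."""
    rng = random.Random(seed)
    corpus = list(clean_docs)
    n_poison = int(len(corpus) * fraction)
    # Pick distinct positions at random and overwrite them with poisoned text.
    for idx in rng.sample(range(len(corpus)), n_poison):
        corpus[idx] = rng.choice(poison_docs)
    return corpus

# Hypothetical corpus: 1,000 trusted documents, 1% replaced with a poisoned one.
clean = [f"trusted medical document {i}" for i in range(1000)]
corpus = poison_dataset(clean, ["POISONED_DOC"], fraction=0.01)
print(sum(1 for d in corpus if d == "POISONED_DOC"))  # 10 of 1,000 documents
```

Even at small fractions like this, a model trained on the corrupted corpus can absorb and later reproduce the injected claims, which is what makes the attack hard to spot by eyeballing the data.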
Jan-8-2025, 10:00:48 GMT