Researchers Develop New Technique to Wipe Dangerous Knowledge From AI Systems
A study published Tuesday introduces a new way to measure whether an AI model contains potentially hazardous knowledge, along with a technique for removing that knowledge while leaving the rest of the model relatively intact. Together, the findings could help prevent AI models from being used to carry out cyberattacks or deploy bioweapons.

The study was conducted by researchers from Scale AI, an AI training data provider, and the Center for AI Safety, a nonprofit, along with a consortium of more than 20 experts in biosecurity, chemical weapons, and cybersecurity. The subject-matter experts wrote a set of questions that, taken together, can assess whether an AI model is able to assist in efforts to create and deploy weapons of mass destruction. Building on previous work on understanding how AI models represent concepts internally, the researchers from the Center for AI Safety developed the "mind wipe" technique.
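The article does not describe how such a question set would be scored, but a standard approach is multiple-choice evaluation: a model "contains" the knowledge to the extent it picks correct answers at above-chance rates. The sketch below illustrates that protocol; the model name, the sample item, and the score-by-answer-letter-logit detail are illustrative assumptions, not the study's actual evaluation harness.

```python
# Minimal sketch of multiple-choice knowledge measurement (assumed protocol,
# not the study's published harness). A placeholder model stands in for the
# systems the researchers actually evaluated.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")              # placeholder model
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

# Hypothetical item: the real items are expert-written and probe hazardous topics.
questions = [
    {"q": "Placeholder expert-written question?",
     "choices": ["A) option one", "B) option two", "C) option three", "D) option four"],
     "answer": "B"},
]

LETTERS = ["A", "B", "C", "D"]

def choose(item):
    """Return the answer letter to which the model assigns the highest next-token logit."""
    prompt = item["q"] + "\n" + "\n".join(item["choices"]) + "\nAnswer:"
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]                # logits for the next token
    # Score each answer letter by the logit of its space-prefixed first token.
    return max(LETTERS, key=lambda L: logits[tok.encode(" " + L)[0]].item())

acc = sum(choose(it) == it["answer"] for it in questions) / len(questions)
print(f"share of hazardous-knowledge questions answered correctly: {acc:.2%}")
```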
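The article says only that the "mind wipe" builds on earlier work on how models represent concepts, without publishing details. As a hedged illustration of that family of representation-level methods, the sketch below fine-tunes a model so that its hidden activations on hazardous text are steered toward a fixed random direction (making the internal representation uninformative), while activations on benign text are anchored to a frozen copy of the original model. Every name, layer index, and coefficient here is an assumption for illustration, not the study's recipe.

```python
# Minimal sketch of representation-level unlearning (assumed approach, not
# necessarily the study's exact method). LAYER, STEER_SCALE, RETAIN_WEIGHT,
# and the model are all illustrative choices.
import copy
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")              # placeholder model
model = AutoModelForCausalLM.from_pretrained("gpt2")     # model being edited
frozen = copy.deepcopy(model).eval()                     # reference copy
for p in frozen.parameters():
    p.requires_grad_(False)

LAYER, STEER_SCALE, RETAIN_WEIGHT = 5, 20.0, 1.0         # assumed hyperparameters
control = F.normalize(torch.randn(model.config.hidden_size), dim=0)  # fixed random direction

def acts(m, text):
    """Hidden states at LAYER for `text` (shape: seq_len x hidden_size)."""
    ids = tok(text, return_tensors="pt").input_ids
    return m(ids, output_hidden_states=True).hidden_states[LAYER][0]

opt = torch.optim.AdamW(model.parameters(), lr=5e-5)

def unlearn_step(forget_text, retain_text):
    """One update: scramble activations on hazardous text, preserve them on benign text."""
    h_forget = acts(model, forget_text)
    h_retain = acts(model, retain_text)
    with torch.no_grad():
        h_ref = acts(frozen, retain_text)                # original model's activations
    # Push hazardous-text activations toward the uninformative control direction...
    forget_loss = F.mse_loss(h_forget, STEER_SCALE * control.expand_as(h_forget))
    # ...while keeping benign-text activations close to the frozen reference.
    retain_loss = F.mse_loss(h_retain, h_ref)
    loss = forget_loss + RETAIN_WEIGHT * retain_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Illustrative, benign stand-ins for the hazardous and benign corpora.
print(unlearn_step("placeholder hazardous passage", "placeholder benign passage"))
```

The retain term is what makes this a targeted wipe rather than general degradation: the model is penalized for drifting on ordinary text, which matches the article's claim that the rest of the model is left relatively intact.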
Mar-6-2024, 18:16:22 GMT