Anthropic Has a Plan to Keep Its AI From Building a Nuclear Weapon. Will It Work?

WIRED 

Anthropic partnered with the US government to create a filter meant to block Claude from helping someone build a nuke. Experts are divided on whether it's a necessary protection, or a protection at all.

At the end of August, the AI company Anthropic announced that its chatbot Claude wouldn't help anyone build a nuclear weapon. According to Anthropic, it had partnered with the Department of Energy (DOE) and the National Nuclear Security Administration (NNSA) to make sure Claude wouldn't spill nuclear secrets.