Exclusive: AI Outsmarts Virus Experts in the Lab, Raising Biohazard Fears
OpenAI, in an email to TIME on Monday, wrote that its newest models, o3 and o4-mini, were deployed with an array of biological-risk-related safeguards, including blocking harmful outputs. The company wrote that it ran a thousand-hour red-teaming campaign in which 98.7% of unsafe bio-related conversations were successfully flagged and blocked. "We value industry collaboration on advancing safeguards for frontier models, including in sensitive domains like virology," a spokesperson wrote. "We continue to invest in these safeguards as capabilities grow." Inglesby argues that industry self-regulation is not enough, and calls for lawmakers and political leaders to develop a policy approach to regulating AI's bio risks.
April 22, 2025, 1:00 p.m. GMT