OpenAI used to test its AI models for months - now it's days. Why that matters
On Thursday, the Financial Times reported that OpenAI has dramatically compressed its safety testing timeline.

Eight people who are either staff at the company or third-party testers told the FT that they had "just days" to complete evaluations of new models -- a process they say would normally take "several months." Evaluations are what can surface model risks and other harms, such as whether a user could jailbreak a model into providing instructions for creating a bioweapon.

For comparison, sources told the FT that OpenAI gave them six months to review GPT-4 before it was released -- and that they found concerning capabilities only after two months. Sources added that OpenAI's tests are not as thorough as they used to be and lack the time and resources needed to properly catch and mitigate risks.
Apr-14-2025, 13:23:40 GMT