Employees at Top AI Labs Fear Safety Is an Afterthought, Report Says

TIME - Tech 

Workers at some of the world's leading AI companies harbor significant concerns about the safety of their work and the incentives driving their leadership, a report published on Monday claimed. The report, commissioned by the State Department and written by employees of the company Gladstone AI, makes several recommendations for how the U.S. should respond to what it argues are significant national security risks posed by advanced AI.

Read More: Exclusive: U.S. Must Move 'Decisively' To Avert 'Extinction-Level' Threat from AI, Government-Commissioned Report Says

The report's authors spoke with more than 200 experts, including employees at OpenAI, Google DeepMind, Meta and Anthropic, leading AI labs that are all working toward "artificial general intelligence," a hypothetical technology that could perform most tasks at or above the level of a human. The authors shared excerpts of concerns that employees at some of these labs voiced to them privately, without naming the individuals or the specific companies they work for. OpenAI, Google, Meta and Anthropic did not immediately respond to requests for comment.

"We have served, through this project, as a de-facto clearing house for the concerns of frontier researchers who are not convinced that the default trajectory of their organizations would avoid catastrophic outcomes," Jeremie Harris, the CEO of Gladstone and one of the authors of the report, tells TIME.

One individual at an unspecified AI lab told the report's authors that the lab has what the report characterized as a "lax approach to safety," stemming from a desire not to slow down its work on building more powerful systems.
