AI Is as Risky as Pandemics and Nuclear War, Top CEOs Say, Urging Global Cooperation

TIME - Tech 

The CEOs of the world's leading artificial intelligence companies, along with hundreds of other AI scientists and experts, made their most unified statement yet about the existential risks to humanity posed by the technology, in a short open letter released Tuesday.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the letter, released by California-based non-profit the Center for AI Safety, says in its entirety.

The CEOs of what are widely seen as the three most cutting-edge AI labs (Sam Altman of OpenAI, Demis Hassabis of DeepMind, and Dario Amodei of Anthropic) are all signatories to the letter. So is Geoffrey Hinton, widely acknowledged as the "godfather of AI," who made headlines last month when he stepped down from his position at Google and warned of the risks AI posed to humanity.