The 'Oppenheimer Moment' That Looms Over Today's AI Leaders
"I always thought AI was going to be way smarter than humans and an existential risk, and that's turning out to be true," Musk said in February, noting he thinks there is a 20% chance of human "annihilation" by AI. While estimates vary, the idea that advanced AI systems could destroy humanity traces back to the origin of many of the labs developing the technology today. In 2015, Altman called the development of superhuman machine intelligence "probably the greatest threat to the continued existence of humanity." Alongside Hassabis and Amodei, he signed a statement in May 2023 declaring that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." "It strikes me as odd that some leaders think that AI can be so brilliant that it will solve the world's problems, using solutions we didn't think of, but not so brilliant that it can't escape whatever control constraints we think of," says Margaret Mitchell, Chief Ethics Scientist at Hugging Face.