AI Is Not an Arms Race
The window of what AI can't do seems to be contracting week by week. Machines can now write elegant prose and useful code, ace exams, conjure exquisite art, and predict how proteins will fold.

Last summer I surveyed more than 550 AI researchers, and nearly half of them thought that, if built, high-level machine intelligence would lead to impacts with at least a 10% chance of being "extremely bad (e.g., human extinction)." On May 30, hundreds of AI scientists, along with the CEOs of top AI labs like OpenAI, DeepMind, and Anthropic, signed a statement urging caution on AI: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

The simplest argument for concern is that progress in AI could lead to the creation of superhumanly smart artificial "people" with goals that conflict with humanity's interests, and with the ability to pursue those goals autonomously.
May-31-2023, 15:59:30 GMT