Risk of extinction by AI should be 'global priority', say tech experts
A group of leading technology experts from across the globe have warned that artificial intelligence should be treated as a societal risk on a par with pandemics and nuclear war.

The brief statement, signed by hundreds of tech executives and academics, was released by the Center for AI Safety on Tuesday amid growing concern over regulation and the risks the technology poses to humanity.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the statement said.

Signatories included the chief executives of Google's DeepMind, the ChatGPT developer OpenAI and the AI startup Anthropic.

The statement comes as global leaders and industry experts – including the leaders of OpenAI – have called for regulation of the technology amid existential fears it could significantly affect job markets, harm the health of millions, and weaponise disinformation, discrimination and impersonation.
May-30-2023, 16:34:05 GMT