Provably safe systems: the only path to controllable AGI
Max Tegmark and Steve Omohundro
arXiv.org Artificial Intelligence
"Once the machine thinking method had started, it would not take long to outstrip our feeble powers. At some stage therefore we should have to expect the machines to take control." (Alan Turing, 1951 [35])

AGI [91] safety is of the utmost urgency, since corporations and research labs are racing to build AGI despite prominent AI researchers and business leaders warning that it may lead to human extinction [11]. While governments are drafting AI regulations, there is little indication that these will be sufficient to resist competitive pressures and prevent the creation of AGI. Median estimates on the forecasting platform Metaculus of the date of AGI's creation have plummeted over the past few years from many decades away to 2027 [25] or 2032 [24], depending on the definition, with superintelligence expected to follow a few years later [23]. Is Alan Turing correct that we now "have to expect the machines to take control"?
Sep-4-2023
- Country:
  - Europe
  - North America > United States
    - California > Santa Clara County > Palo Alto (0.04)
    - Massachusetts > Middlesex County > Cambridge (0.14)
    - New Jersey
      - Hudson County > Hoboken (0.04)
      - Mercer County > Princeton (0.04)
- Genre:
  - Research Report (0.82)
- Industry:
  - Government > Military (0.93)
  - Health & Medicine (1.00)
  - Information Technology > Security & Privacy (1.00)
  - Law (1.00)
- Technology:
  - Information Technology
    - Artificial Intelligence
      - Cognitive Science (0.93)
      - Machine Learning > Neural Networks > Deep Learning (1.00)
      - Natural Language
        - Chatbot (0.68)
        - Large Language Model (1.00)
      - Representation & Reasoning > Logic & Formal Reasoning (1.00)
      - Robots (0.67)
    - Communications > Social Media (0.73)
    - Security & Privacy (1.00)