AI Must not be Fully Autonomous
Adewumi, Tosin, Alkhaled, Lama, Imbert, Florent, Han, Hui, Habib, Nudrat, Löwenmark, Karl
–arXiv.org Artificial Intelligence
Autonomous Artificial Intelligence (AI) offers many benefits, but it also carries many risks. In this work, we identify three levels of autonomous AI. We take the position that AI must not be fully autonomous because of these many risks, especially as artificial superintelligence (ASI) is speculated to be only decades away. Fully autonomous AI, which can develop its own objectives, sits at level 3 and operates without responsible human oversight; such oversight is crucial for mitigating the risks. To argue for our position, we discuss theories of autonomy, AI, and agents. We then offer 12 distinct arguments and 6 counterarguments with rebuttals to the counterarguments. We also present 15 pieces of recent evidence of misaligned AI values and other risks in the appendix.
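The three-level scheme in the abstract can be sketched as a simple deployment gate that refuses full autonomy. This is an illustrative sketch, not the paper's formalism: the abstract only specifies that level 3 means the AI develops its own objectives without responsible human oversight, so the meanings attached to levels 1 and 2, and the names `AutonomyLevel` and `deployment_permitted`, are assumptions for illustration.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Illustrative levels; only FULL (level 3) is defined in the abstract."""
    ASSISTED = 1    # assumed: human sets objectives and directs execution
    SUPERVISED = 2  # assumed: AI executes under responsible human oversight
    FULL = 3        # AI develops its own objectives, without human oversight

def deployment_permitted(level: AutonomyLevel) -> bool:
    """Gate reflecting the paper's position: never deploy at full autonomy."""
    return level < AutonomyLevel.FULL
```

For example, `deployment_permitted(AutonomyLevel.SUPERVISED)` returns `True`, while `deployment_permitted(AutonomyLevel.FULL)` returns `False`.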
Aug-1-2025
- Country:
- Europe
- Sweden > Norrbotten County
- Luleå (0.40)
- Switzerland (0.04)
- United Kingdom > England
- Cambridgeshire > Cambridge (0.04)
- Oxfordshire > Oxford (0.04)
- North America > United States
- California > San Francisco County
- San Francisco (0.04)
- Florida > Palm Beach County
- Boca Raton (0.04)
- South America > Chile
- Genre:
- Research Report (0.40)
- Industry:
- Government
- Military (0.46)
- Regional Government (0.46)
- Information Technology > Security & Privacy (0.67)
- Law (1.00)
- Technology:
- Information Technology > Artificial Intelligence
- Cognitive Science (1.00)
- Issues > Social & Ethical Issues (1.00)
- Machine Learning
- Evolutionary Systems (0.68)
- Neural Networks > Deep Learning (1.00)
- Natural Language
- Chatbot (1.00)
- Large Language Model (1.00)
- Representation & Reasoning > Agents (1.00)
- Robots (1.00)