A Case for AI Safety via Law
arXiv.org Artificial Intelligence
How to make artificial intelligence (AI) systems safe and aligned with human values is an open research question. Proposed solutions tend to rely on human intervention in uncertain situations, learning human values and intentions through training or observation, providing off-switches, implementing isolation or simulation environments, or extrapolating what people would want if they had more knowledge and more time to think. Law-based approaches, such as those inspired by Isaac Asimov, have not been well regarded. This paper makes a case that effective legal systems are the best way to address AI safety. Law is defined as any rules that codify prohibitions and prescriptions applicable to particular agents in specified domains/contexts, and it includes processes for enacting, managing, enforcing, and litigating such rules.
Oct-3-2023
- Country:
- Asia > China (0.04)
- Europe
- Netherlands > South Holland
- The Hague (0.04)
- United Kingdom > England
- Oxfordshire > Oxford (0.14)
- North America > United States
- Arizona > Maricopa County
- Phoenix (0.04)
- California > Santa Clara County
- Palo Alto (0.04)
- Maryland > Prince George's County
- College Park (0.04)
- New York (0.04)
- Genre:
- Research Report (0.91)
- Industry:
- Technology:
- Information Technology > Artificial Intelligence
- Cognitive Science (1.00)
- Issues > Social & Ethical Issues (1.00)
- Machine Learning > Neural Networks
- Deep Learning (0.46)
- Natural Language (1.00)
- Representation & Reasoning > Agents (1.00)
- Robots (1.00)