Negative Human Rights as a Basis for Long-term AI Safety and Regulation
Ondrej Bajgar, Jan Horenovsky
arXiv.org Artificial Intelligence
If autonomous AI systems are to be reliably safe in novel situations, they will need to incorporate general principles guiding them to recognize and avoid harmful behaviours. Such principles may need to be supported by a binding system of regulation, which would in turn require the principles to be widely accepted. The principles should also be specific enough for technical implementation. Drawing inspiration from law, this article explains how negative human rights could fulfil the role of such principles and serve as a foundation both for an international regulatory system and for building technical safety constraints for future AI systems.
Apr-20-2023
- Country:
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.28)
- North America > United States (1.00)
- Genre:
- Research Report (1.00)
- Technology:
- Information Technology > Artificial Intelligence
  - Cognitive Science (0.92)
  - Machine Learning (1.00)
  - Natural Language (1.00)
  - Representation & Reasoning (1.00)
  - Robots (0.92)
- Issues > Social & Ethical Issues (1.00)