We need to focus on the AI harms that already exist
One problem with minimizing existing AI harms by claiming that hypothetical existential harms matter more is that it shifts the flow of valuable resources and legislative attention. Companies that claim to fear existential risk from AI could show a genuine commitment to safeguarding humanity by not releasing the AI tools they claim could end humanity. I am not opposed to preventing the creation of fatal AI systems. Governments concerned with lethal uses of AI can adopt the protections long championed by the Campaign to Stop Killer Robots to ban lethal autonomous systems and digital dehumanization. The campaign addresses potentially fatal uses of AI without making the hyperbolic jump that we are on a path to creating sentient systems that will destroy all humankind.
Oct-30-2023, 10:34:57 GMT