US has 'moral imperative' to develop AI weapons, says panel
The US should not agree to ban the use or development of autonomous weapons powered by artificial intelligence (AI) software, a government-appointed panel has said in a draft report for Congress.

The panel, led by former Google chief executive Eric Schmidt, on Tuesday concluded two days of public discussion about how the world's biggest military power should consider AI for national security and technological advancement. Its vice-chairman, Robert Work, a former deputy secretary of defense, said autonomous weapons are expected to make fewer mistakes than humans do in battle, leading to reduced casualties or skirmishes caused by target misidentification. "It is a moral imperative to at least pursue this hypothesis," he said.

For about eight years, a coalition of non-governmental organisations has pushed for a treaty banning "killer robots", saying human control is necessary to judge attacks' proportionality and assign blame for war crimes.
Jan-26-2021, 22:38:01 GMT