AI: Could It Be More Ethical Than Humans? – Analysis
Artificial intelligence in autonomous systems (e.g., drones) can address problems of human error and fatigue and may, in the future, even ease concerns over ethical behaviour on the battlefield. Installing an algorithmic "moral compass" in AI, however, will be challenging.

A common theme in discussions of the military uses of artificial intelligence (AI) is the "Skynet" trope: the fear that AI will become self-aware and turn on its masters. Inherent in this argument is the contention that AI does not share the ethical constraints that humans do. While almost certainly an exaggeration, the Skynet scenario does highlight the problem of ensuring that the ethical behaviour we expect of humans in combat is not lost as we increasingly devolve battlefield decision-making to autonomous systems.
Dec-12-2019, 14:27:04 GMT