These 23 Principles Could Help Us Avoid an AI Apocalypse
Science fiction author Isaac Asimov famously predicted that we'll one day have to program robots with a set of laws that protect us from our mechanical creations. But before we get there, we need rules to ensure that, at the most fundamental level, we're developing AI responsibly and safely.

At a recent gathering, a group of experts did just that, coming up with 23 principles to steer the development of AI in a positive direction--and to ensure it doesn't destroy us. The new guidelines, dubbed the 23 Asilomar AI Principles, touch upon issues pertaining to research, ethics, and foresight--from research strategies and data rights to transparency issues and the risks of artificial superintelligence.

Previous attempts to establish AI guidelines, including efforts by the IEEE Standards Association, Stanford University's AI100 Standing Committee, and even the White House, were either too narrow in scope or far too generalized.
Feb 3, 2017, 13:35:04 GMT