These 23 Principles Could Help Us Avoid an AI Apocalypse

#artificialintelligence 

Science fiction author Isaac Asimov famously predicted that we'll one day have to program robots with a set of laws to protect us from our mechanical creations. But before we get there, we need rules to ensure that, at the most fundamental level, we're developing AI responsibly and safely. At a recent gathering, a group of experts did just that, drafting 23 principles to steer the development of AI in a positive direction and to ensure it doesn't destroy us. The new guidelines, dubbed the Asilomar AI Principles, touch on issues of research, ethics, and foresight, from research strategies and data rights to transparency and the risks of artificial superintelligence. Previous attempts to establish AI guidelines, including efforts by the IEEE Standards Association, Stanford University's AI100 Standing Committee, and even the White House, were either too narrow in scope or far too generalized.
