Why aligning AI to our values may be harder than we think

#artificialintelligence 

Plenty of scientists, philosophers, and science fiction writers have wondered how to keep a potential super-human AI from destroying us all. While the obvious answer of "unplug it if it tries to kill you" has many supporters (and it worked on the HAL 9000), it isn't hard to imagine that a sufficiently advanced machine could prevent you from doing so. Alternatively, a very powerful AI might make decisions too rapidly for humans to review for ethical correctness or to correct the damage they cause. The problem of keeping a potentially super-human AI from going rogue and hurting people is called the "control problem," and many solutions to it have been proposed. One of the most frequently discussed is "alignment," which involves syncing an AI's values, goals, and ethical standards with our own.
