Why does advanced AI want not to be shut down? - LessWrong


I've always been pretty confused about this. The standard AI risk scenarios usually (though I think not always) suppose that an advanced AI wants not to be shut down. As commonly framed, the AI will fool humanity into believing it is aligned so that it is not turned off, until, all at once, it destroys humanity and gains control over all of Earth's resources. But why does the AI want not to be shut down in the first place? For humans, the motivation to avoid death comes from evolution.
