Russell, Bostrom and the Risk of AI


Now, I suppose it is possible that once an agent reaches a sufficient level of intellectual ability, it derives some universal morality from the ether and there really is nothing to worry about. But I hope you agree that this is, at the very least, not a conservative assumption. For the purposes of this article I will take the orthogonality thesis — the claim that an agent's level of intelligence and its final goals can vary independently — as a given. So a smarter-than-human artificial intelligence can have any goal.