Google DeepMind Researchers Develop AI Kill Switch


Artificial intelligence doesn't have to include murderous, sentient super-intelligence to be dangerous. If a machine can learn based on real-world inputs and adjust its behaviors accordingly, there exists the potential for that machine to learn the wrong thing. And if a machine can learn the wrong thing, it can do the wrong thing. Laurent Orseau and Stuart Armstrong, researchers at Google's DeepMind and the Future of Humanity Institute, respectively, have developed a new framework to address this in the form of "safely interruptible" artificial intelligence. In other words, their system, described in a paper to be presented at the 32nd Conference on Uncertainty in Artificial Intelligence, guarantees that a machine will not learn to resist attempts by humans to intervene in its learning processes.
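To make the idea concrete, here is a minimal toy sketch of what "safe interruptibility" can look like in a learning agent. This is an illustration under assumptions, not the authors' actual algorithm: the class name, the forced "safe action," and the rule of simply skipping learning updates during interrupted steps are all choices made for this example. The point it demonstrates is that if human interruptions never feed back into the agent's value estimates, the agent gains no incentive to predict or resist being interrupted.

```python
import random

class InterruptibleQLearner:
    """Toy Q-learning agent with an external interruption switch.

    Illustrative sketch only: the skip-update rule below is an
    assumption for this example, not the paper's exact scheme.
    """

    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.9, eps=0.1):
        self.q = [[0.0] * n_actions for _ in range(n_states)]
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.n_actions = n_actions

    def act(self, state, interrupted=False, safe_action=0):
        # A human interruption overrides the learned policy
        # with a designated safe action.
        if interrupted:
            return safe_action
        # Otherwise, epsilon-greedy action selection.
        if random.random() < self.eps:
            return random.randrange(self.n_actions)
        row = self.q[state]
        return row.index(max(row))

    def update(self, s, a, r, s_next, interrupted=False):
        # Key idea: transitions taken under interruption are excluded
        # from learning, so interruptions cannot teach the agent that
        # being interrupted is costly (or worth avoiding).
        if interrupted:
            return
        best_next = max(self.q[s_next])
        self.q[s][a] += self.alpha * (r + self.gamma * best_next - self.q[s][a])
```

In this sketch, an interrupted step still forces the safe behavior in the world, but leaves the Q-table untouched, so the agent's learned preferences are the same as if the interruption had never happened.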
