Containment algorithms won't stop super-intelligent AI, scientists warn

#artificialintelligence

A team of computer scientists has used theoretical calculations to argue that algorithms could not control a super-intelligent AI. Their study addresses what Oxford philosopher Nick Bostrom calls the control problem: how do we ensure super-intelligent machines act in our interests? The researchers conceived of a theoretical containment algorithm that would resolve this problem by simulating the AI's behavior and halting the program if its actions became harmful. If you break the problem down to basic rules from theoretical computer science, it turns out that an algorithm commanding an AI not to destroy the world could inadvertently halt its own operations. If this happened, you would not know whether the containment algorithm is still analyzing the threat, or whether it has stopped to contain the harmful AI.
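The argument above is a reduction to Turing's halting problem, and it can be sketched in a few lines of code. The names below (`make_adversary`, `naive_contains`) are hypothetical illustrations, not code from the study: assume some containment routine `contains(prog)` claims to decide whether running `prog` would be harmful, and build a program that does the opposite of whatever the routine predicts.

```python
def make_adversary(contains):
    """Build a program that defeats a proposed containment decider."""
    def adversary():
        if contains(adversary):
            return "harmless"   # flagged as harmful -> behave safely
        return "harmful"        # cleared as safe -> behave harmfully
    return adversary

def naive_contains(prog):
    # "Simulate the AI and inspect the result": on the adversary this
    # simulation never terminates, so the decider hangs instead of answering.
    return prog() == "harmful"

adversary = make_adversary(naive_contains)
```

Calling `adversary()` here sends the simulate-and-inspect decider into endless recursion, which is the scenario the researchers describe: an observer cannot tell whether the containment algorithm is still analyzing the threat or has silently stopped.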


Humans won't be able to control a superintelligent AI, according to a study

#artificialintelligence

It may not be theoretically possible to predict the actions of artificial intelligence, according to researchers from the Center for Humans and Machines at the Max Planck Institute for Human Development. "A super-intelligent machine that controls the world sounds like science fiction," said Manuel Cebrian, co-author of the study and leader of the research group. "But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned it [sic]." Our society relies increasingly on artificial intelligence: from AI-run interactive job interviews to creating music and even memes, AI is already very much part of everyday life. According to the research group's study, published in the Journal of Artificial Intelligence Research, predicting an AI's actions would require a simulation of that exact superintelligence.


Humans risk being unable to control artificial intelligence, scientists fear

#artificialintelligence

Scientists have warned that humanity risks losing control of artificial intelligence if it keeps developing. AI software is becoming more common, with companies such as Amazon trialling autonomous vehicles. Experts recently made a major breakthrough with a revolutionary new AI system that never stops learning. But as the technology develops, an international group of researchers has warned of the increasing dangers of standalone software. In a study published in the Journal of Artificial Intelligence Research, author Manuel Cebrian said: "A super-intelligent machine that controls the world sounds like science fiction."


It would be impossible to pull the plug on AI that wanted to harm humans, scientists warn

Daily Mail - Science & tech

The idea of an artificial intelligence (AI) uprising may sound like the plot of a science-fiction film, but the notion is the topic of a new study which finds that it is possible and that we would not be able to stop it. A team of international scientists designed a theoretical containment algorithm that ensures a super-intelligent system could not harm people under any circumstance, by simulating the AI and blocking it from wreaking havoc on humanity. However, the analysis shows current algorithms do not have the ability to halt such an AI, because commanding the system not to destroy the world would inadvertently halt the algorithm's own operations. Iyad Rahwan, Director of the Center for Humans and Machines, said: 'If this happened, you would not know whether the containment algorithm is still analyzing the threat, or whether it has stopped to contain the harmful AI. In effect, this makes the containment algorithm unusable.' AI has fascinated humans for years, as we stand in awe of machines that control cars, compose symphonies or beat the world's best chess player at their own game.
