Practicality of Isaac Asimov's Three Laws

#artificialintelligence

Like many other science fiction writers in the 1940s and 1950s, Asimov was greatly influenced by ideas from hard science fiction writer Robert A. Heinlein about what future societies might look like. Heinlein's "Future History" series described a set of laws that were supposed to guide the behavior of citizen-soldiers in his future society. In Asimov's books, the Three Laws were incorporated into virtually all robots within the fictional universe, so much so that breaking them was viewed as an unthinkable violation of one's programming. In many cases, a robot found to have broken the Laws would be dismantled for disposal. In essence, these three principles provide a reliable set of guidelines for robots without preventing them from having a significant impact on society -- which is what Asimov intended all along.


It would be impossible to pull the plug on AI that wanted to harm humans, scientists warn

Daily Mail - Science & tech

The idea of an artificial intelligence (AI) uprising may sound like the plot of a science-fiction film, but a new study finds that it is possible and that we would not be able to stop it. A team of international scientists designed a theoretical containment algorithm that would ensure a super-intelligent system could not harm people under any circumstances, by simulating the AI and blocking it from wreaking havoc on humanity. However, their analysis shows that current algorithms cannot halt such an AI, because commanding the system not to destroy the world would inadvertently halt the containment algorithm's own operations. Iyad Rahwan, Director of the Center for Humans and Machines, said: 'If this happened, you would not know whether the containment algorithm is still analyzing the threat, or whether it has stopped to contain the harmful AI. In effect, this makes the containment algorithm unusable.' AI has fascinated humans for years, as we marvel at machines that control cars, compose symphonies or beat the world's best chess players at their own game.


Superintelligence Cannot be Contained: Lessons from Computability Theory

Alfonseca, Manuel (Universidad Autonoma de Madrid) | Cebrian, Manuel (Center for Humans & Machines, Max-Planck Institute for Human Development) | Fernandez Anta, Antonio (IMDEA Networks Institute) | Coviello, Lorenzo (Google, USA) | Abeliuk, Andrés (USC Information Sciences Institute) | Rahwan, Iyad (Center for Humans & Machines, Max-Planck Institute for Human Development)

Journal of Artificial Intelligence Research

Superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. In light of recent advances in machine intelligence, a number of scientists, philosophers and technologists have revived the discussion about the potentially catastrophic risks entailed by such an entity. In this article, we trace the origins and development of the neo-fear of superintelligence, and some of the major proposals for its containment. We argue that total containment is, in principle, impossible, due to fundamental limits inherent to computing itself. Assuming that a superintelligence will contain a program that includes all the programs that can be executed by a universal Turing machine on input potentially as complex as the state of the world, strict containment requires simulations of such a program, something theoretically (and practically) impossible.

"Machines take me by surprise with great frequency. This is largely because I do not do sufficient calculation to decide what to expect them to do." -- Alan Turing (1950), Computing Machinery and Intelligence, Mind, 59, 433-460
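The impossibility argument in the abstract is the classic halting-problem diagonalization. A minimal, hedged sketch in Python (the names `make_adversary`, `simulating_checker` and `paranoid_checker` are illustrative, not from the paper): any candidate containment checker can be handed a program built to invert the checker's own verdict about it, so a checker that works by simulation never terminates, and a checker that always answers is wrong by construction.

```python
def make_adversary(checker):
    """Build a program that inverts whatever verdict `checker` gives about it."""
    def adversary():
        # Act safely exactly when the checker predicts harm, and vice versa.
        return "safe" if checker(adversary) else "HARM"
    return adversary

def simulating_checker(program):
    # Tries to decide harm by fully simulating the program -- the strategy the
    # paper argues cannot work in general.
    return program() == "HARM"

# Horn 1: simulating the diagonal adversary never terminates (the infinite
# mutual recursion surfaces here as Python's RecursionError).
adv = make_adversary(simulating_checker)
try:
    simulating_checker(adv)
    verdict = "decided"
except RecursionError:
    verdict = "undecidable: the simulation never terminates"
print(verdict)

# Horn 2: a checker that always answers is defeated by construction.
def paranoid_checker(program):
    return True  # flags every program as harmful

adv2 = make_adversary(paranoid_checker)
print(paranoid_checker(adv2), adv2())  # checker says harmful; program acts safe
```

The toy is only an analogy for the paper's Turing-machine argument, but it shows both horns: no total `checker` can classify every program correctly, whichever strategy it adopts.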


Home and factory robots can be hacked to harm humans

Engadget

Last month, cybersecurity firm IOActive let everyone know that Segway MiniPro hoverboards were vulnerable to hacks and outside control via their Bluetooth connections. Now it has revealed that industrial robots from Universal Robots and consumer models from Softbank Group and UBTech Robotics also have some troubling security flaws that can allow hackers to "modify safety settings, violating applicable safety laws and, consequently, causing physical harm to the robot's surroundings by moving it arbitrarily," according to a report published by the company today. The devices produced by Universal Robots are uncaged industrial robots meant to work with humans. Safety features are put in place to make sure working alongside the robots is safe for humans, but IOActive was able to override those features after hacking into the software. The company told Bloomberg that with these robots, "even running at low speeds, their force is more than sufficient to cause a skull fracture."


Thou shalt not kill: Official guidelines to keep humans safe from robots are published by standards authority

Daily Mail - Science & tech

The science fiction author Isaac Asimov first proposed the 'Three Laws of Robotics' in a short story published in 1942 as a way of ensuring that machines would not rise up to overthrow humanity. But with robots now starting to appear in people's homes and artificial intelligence developing rapidly, a group of experts has drawn up a new list of rules to protect humanity from its creations. The British Standards Institution, which develops technical and quality guidelines for goods sold in the UK and issues the famous Kitemark certificate, has drawn up a new standard for robotics designers to help ensure robots do not pose a risk to humans. It says the growing use of robots in homes, shops and industry poses an 'ethical hazard', and states that robots should not be designed to kill or harm humans, echoing Asimov's first law of robotics.


Do no harm, don't discriminate: official guidance issued on robot ethics

#artificialintelligence

Isaac Asimov gave us the basic rules of good robot behaviour: don't harm humans, obey orders and protect yourself. Now the British Standards Institution has issued a more official version aimed at helping designers create ethically sound robots. The document, BS 8611 Robots and robotic devices, is written in the dry language of a health and safety manual, but the undesirable scenarios it highlights could be taken directly from fiction. Robot deception, robot addiction and the possibility of self-learning systems exceeding their remits are all noted as hazards that manufacturers should consider. Welcoming the guidelines at the Social Robotics and AI conference in Oxford, Alan Winfield, a professor of robotics at the University of the West of England, said they represented "the first step towards embedding ethical values into robotics and AI".

