Artificial intelligence boosters predict a brave new world of flying cars and cancer cures. Detractors worry about a future where humans are enslaved to an evil race of robot overlords. Veteran AI scientist Eric Horvitz and Doomsday Clock guru Lawrence Krauss, seeking a middle ground, gathered a group of experts in the Arizona desert to discuss the worst that could possibly happen -- and how to stop it. Their workshop took place last weekend at Arizona State University with funding from Tesla Inc. co-founder Elon Musk and Skype co-founder Jaan Tallinn. Officially dubbed "Envisioning and Addressing Adverse AI Outcomes," it was a kind of AI doomsday game that organized some 40 scientists, cyber-security experts and policy wonks into groups of attackers -- the red team -- and defenders -- the blue team -- playing out AI-gone-very-wrong scenarios, ranging from stock-market manipulation to global warfare.
Operational AI systems (for example, self-driving cars) need to obey both the law of the land and our values. We propose AI oversight systems ("AI Guardians") as an approach to addressing this challenge and to responding to the potential risks associated with increasingly autonomous AI systems. These AI oversight systems serve to verify that operational systems do not stray unduly from the guidelines of their programmers, and to bring them back into compliance if they do stray. The introduction of such second-order, oversight systems is not meant to suggest strict, powerful, or rigid (from here on, "strong") controls. Operational systems need a great degree of latitude in order to follow the lessons of their learning from additional data mining and experience, and to be able to render at least semi-autonomous decisions (more about this later).
This week, a self-driving Tesla had a fatal crash. Other than that – a lot about robots, whether AI can create art, cloning animals, and more! Ray Kurzweil and people like him believe the Singularity is just around the corner and promise a new, perfect world. They are very optimistic about the future. But sometimes you should listen to the other side to better understand the problem or vision.
It cannot be denied that Artificial Intelligence is having a growing impact in many areas of human activity. It is helping humans to communicate with each other, even beyond linguistic boundaries, find information in the vast resources available on the web, solve challenging problems that go beyond the competence of a single expert, and enable the deployment of autonomous systems, such as self-driving cars, that handle complex interactions with the real world with little or no human intervention. These applications are perhaps not like the fully autonomous, conscious, intelligent robots that science fiction stories have been predicting, but they are nevertheless very important and useful, and, most importantly, they are real and here today. But neither can it be denied that Artificial Intelligence comes with certain risks. Many people (including luminaries such as Bill Gates and Stephen Hawking) believe that the main risk of artificial intelligence is that it gets out of hand.
One response to the call by experts in robotics and artificial intelligence for a ban on "killer robots" ("lethal autonomous weapons systems", or LAWS, in the language of international treaties) is to say: shouldn't you have thought about that sooner? Figures such as Tesla's CEO, Elon Musk, are among the 116 specialists calling for the ban. "We do not have long to act," they say. "Once this Pandora's box is opened, it will be hard to close." But such systems are arguably already here, such as the "unmanned combat air vehicle" Taranis developed by BAE Systems and others, or the autonomous SGR-A1 sentry gun made by Samsung and deployed along the South Korean border.