One response to the call by experts in robotics and artificial intelligence for a ban on "killer robots" ("lethal autonomous weapons systems", or LAWS, in the language of international treaties) is to say: shouldn't you have thought about that sooner? Figures such as Tesla's CEO, Elon Musk, are among the 116 specialists calling for the ban. "We do not have long to act," they say. "Once this Pandora's box is opened, it will be hard to close." But such systems are arguably already here, such as the "unmanned combat air vehicle" Taranis developed by BAE and others, or the autonomous SGR-A1 sentry gun made by Samsung and deployed along the South Korean border.
Should the government regulate artificial intelligence? That was the central question of the first White House workshop on the legal and governance implications of AI, held in Seattle on Tuesday. "We are observing issues around AI and machine learning popping up all over the government," said Ed Felten, White House deputy chief technology officer. "We are nowhere near the point of broadly regulating AI … but the challenge is how to ensure AI remains safe, controllable, and predictable as it gets smarter." One of the key aims of the workshop, said one of its organizers, University of Washington law professor Ryan Calo, was to help the public understand where the technology is now and where it's headed.
Artificial intelligence and robots are hot topics right now, but will we ever get to the stage we saw 50 years ago on "The Jetsons," where a typical household could have a robotic maid named Rosie? Robotics pioneer David Hanson says yes, and he thinks it'll take less than 50 more years. That's the prediction he delivered on Wednesday during a Skype-enabled panel presentation on the future of AI and robotics in Seattle, sponsored by the MIT Enterprise Forum of the Northwest. A veteran of Disney's Imagineering operation, Hanson has produced custom-made robot heads that are capable of eerily humanlike expressions. Now Hanson has relocated to Hong Kong, where he's gearing up to unveil a line of production-model robots that take advantage of recent AI advances as well as the toymaking prowess of the Pearl River Delta.
Isaac Asimov gave us the basic rules of good robot behaviour: don't harm humans, obey orders and protect yourself. Now the British Standards Institute has issued a more official version aimed at helping designers create ethically sound robots. The document, BS 8611 Robots and robotic devices, is written in the dry language of a health and safety manual, but the undesirable scenarios it highlights could be taken directly from fiction. Robot deception, robot addiction and the possibility of self-learning systems exceeding their remits are all noted as hazards that manufacturers should consider. Welcoming the guidelines at the Social Robotics and AI conference in Oxford, Alan Winfield, a professor of robotics at the University of the West of England, said they represented "the first step towards embedding ethical values into robotics and AI".
Smart robots seem to be everywhere. Whether they're performing surgery, trouncing Go champions or generating dreamy artwork, computers programmed to learn on their own are growing more intelligent by the day. Southwestern Law School professor Ryan Abbott believes that computers are even generating patentable subject matter. We just don't know about it, he says, because disclosing it on an application might render the invention unpatentable. "Now that very large companies like IBM, Pfizer and Google are investing heavily in creative computing, it's going to play a much greater role in innovation in the future," he says.