We can't ban killer robots – it's already too late
Philip Ball

#artificialintelligence

One response to the call by experts in robotics and artificial intelligence for a ban on "killer robots" ("lethal autonomous weapons systems", or Laws in the language of international treaties) is to say: shouldn't you have thought about that sooner? Figures such as Tesla's CEO, Elon Musk, are among the 116 specialists calling for the ban. "We do not have long to act," they say. "Once this Pandora's box is opened, it will be hard to close." But such systems are arguably already here, such as the "unmanned combat air vehicle" Taranis developed by BAE and others, or the autonomous SGR-A1 sentry gun made by Samsung and deployed along the South Korean border.


Robot Kanye will free you from the human labor of listening to the real thing

#artificialintelligence

Before Kanye West gets to the White House, we'll first have to survive the robot apocalypse brought about by his A.I.-powered doppelgänger. It's a very real piece of software created by a high school student from West Virginia. Robbie Barrat, a 17-year-old hip-hop fan and coding whiz, taught himself to code using open source software, according to a report from Quartz. Initially, the software simply rearranged 6,000 Kanye rap phrases to create new songs, but it has since been modified to compose original rap lines from the Kanye word bank. On the YouTube page demonstrating the software's ability, Barrat says, "Excluding the beat; this song was written 100 percent by a deep neural network."
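
To give a flavour of the first stage Barrat described – rearranging phrases from an existing corpus – here is a minimal sketch using a first-order Markov chain over a toy phrase bank. This is purely illustrative: the phrase data, and the names PHRASES, build_chain and generate_line, are assumptions for the example, not Barrat's actual code, and his later version replaced this kind of recombination with a deep neural network.

    import random

    # Toy stand-in for the "Kanye word bank" (Barrat's corpus reportedly
    # held around 6,000 phrases; these four are illustrative only).
    PHRASES = [
        "no one man should have all that power",
        "we the new slaves",
        "my presence is a present",
        "I am a god",
    ]

    def build_chain(phrases):
        """Map each word to the list of words observed to follow it."""
        chain = {}
        for phrase in phrases:
            words = phrase.split()
            for current, nxt in zip(words, words[1:]):
                chain.setdefault(current, []).append(nxt)
        return chain

    def generate_line(chain, max_words=8):
        """Walk the chain from a random start word to emit a new line."""
        word = random.choice(list(chain))
        line = [word]
        for _ in range(max_words - 1):
            followers = chain.get(word)
            if not followers:  # dead end: no observed successor
                break
            word = random.choice(followers)
            line.append(word)
        return " ".join(line)

    if __name__ == "__main__":
        chain = build_chain(PHRASES)
        for _ in range(3):
            print(generate_line(chain))

Running the script prints a few recombined lines, each stitched together word by word from transitions seen in the source phrases – the same basic idea, at toy scale, as shuffling fragments of an artist's back catalogue into "new" songs.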


What to Do When a Robot Is the Guilty Party

#artificialintelligence

Should the government regulate artificial intelligence? That was the central question of the first White House workshop on the legal and governance implications of AI, held in Seattle on Tuesday. "We are observing issues around AI and machine learning popping up all over the government," said Ed Felten, White House deputy chief technology officer. "We are nowhere near the point of broadly regulating AI … but the challenge is how to ensure AI remains safe, controllable, and predictable as it gets smarter." One of the key aims of the workshop, said one of its organizers, University of Washington law professor Ryan Calo, was to help the public understand where the technology is now and where it's headed.


Do no harm, don't discriminate: official guidance issued on robot ethics

#artificialintelligence

Isaac Asimov gave us the basic rules of good robot behaviour: don't harm humans, obey orders and protect yourself. Now the British Standards Institution has issued a more official version aimed at helping designers create ethically sound robots. The document, BS 8611 Robots and robotic devices, is written in the dry language of a health and safety manual, but the undesirable scenarios it highlights could be taken directly from fiction. Robot deception, robot addiction and the possibility of self-learning systems exceeding their remits are all noted as hazards that manufacturers should consider. Welcoming the guidelines at the Social Robotics and AI conference in Oxford, Alan Winfield, a professor of robotics at the University of the West of England, said they represented "the first step towards embedding ethical values into robotics and AI".


Patent Law at the AI Crossroads

#artificialintelligence

Smart robots seem to be everywhere. Whether they're performing surgery, trouncing Go champions or generating dreamy artwork, computers programmed to learn on their own are growing more intelligent by the day. Southwestern Law School professor Ryan Abbott believes that computers are even generating patentable subject matter. We just don't know about it, he says, because disclosing it on an application might render the invention unpatentable. "Now that very large companies like IBM, Pfizer and Google are investing heavily in creative computing, it's going to play a much greater role in innovation in the future," he says.