Artificial intelligence could 'evolve faster than the human race'

#artificialintelligence

A sinister threat is brewing deep inside the technology laboratories of Silicon Valley, according to Professor Stephen Hawking. Artificial Intelligence, disguised as helpful digital assistants and self-driving vehicles, is gaining a foothold, and it could one day spell the end for mankind. The world-renowned professor has warned that robots could evolve faster than humans and that their goals would be unpredictable. Professor Stephen Hawking (pictured) claimed AI would be difficult to stop if the appropriate safeguards are not in place. During a talk in Cannes, Google's chairman Eric Schmidt said AI will be developed for the benefit of humanity and there will be systems in place in case anything goes awry.


Professor Stephen Hawking warns of rogue robot rebellion evolving faster than humans

Daily Mail - Science & tech



Artificial Intelligence Law is Here, Part One

#artificialintelligence

In the early to mid-'90s, while my friends were getting into indie rock, I was hacking away at robots and getting them to learn to map a room. As a computer science graduate student, I programmed LISP algorithms for parsing nursing records in order to predict intervention codes. I was no less a nerd (or, to put it a better way, a technology enthusiast) in law school, when I wrote about how natural language processing could improve legal research tools. I didn't put much thought, either as a computer scientist or as a law student, into whether artificial intelligence (AI) should be regulated. Frankly, those were such early days for the technology that AI regulation seemed like science fiction, à la Isaac Asimov's three laws of robotics.


We can't ban killer robots – it's already too late Philip Ball

#artificialintelligence

One response to the call by experts in robotics and artificial intelligence for a ban on "killer robots" ("lethal autonomous weapons systems", or Laws, in the language of international treaties) is to say: shouldn't you have thought about that sooner? Figures such as Tesla's CEO, Elon Musk, are among the 116 specialists calling for the ban. "We do not have long to act," they say. "Once this Pandora's box is opened, it will be hard to close." But such systems are arguably already here, such as the "unmanned combat air vehicle" Taranis developed by BAE and others, or the autonomous SGR-A1 sentry gun made by Samsung and deployed along the South Korean border.


How to train your ROBOT

#artificialintelligence

Robots are like dogs because, according to some experts, a badly-trained robot could end up misbehaving just like a badly-trained dog. This warning came at a meeting discussing the future of robot and human interactions, held in London this week. But the panel, who emphasised the importance of regulations controlling AI, agreed that a doomsday situation in which robots take over is unlikely to happen soon. Organised by the EPSRC UK Robotics and Autonomous Systems Network (UK-RAS Network), UK Robotics Week included a series of events across the country, aiming to get the public engaged with the developments and debate in and around robotics.