Elon Musk, DeepMind and AI researchers promise not to develop robot killing machines

The Independent - Tech

Elon Musk and many of the world's most respected artificial intelligence researchers have committed not to build autonomous killer robots. The public pledge not to make any "lethal autonomous weapons" comes amid increasing concern about how machine learning and AI will be used on the battlefields of the future. The signatories to the new pledge – which include the founders of DeepMind, a founder of Skype, and leading academics from across the industry – promise that they will not allow the technology they create to be used to help create killing machines.


Tesla shares plunge after Elon Musk smokes joint on Joe Rogan podcast

Daily Mail - Science & tech

Tesla shares have plunged this morning after Elon Musk smoked marijuana and drank whiskey while discussing everything from drugs to the possibility we're all living in a simulation, in a rambling two-and-a-half-hour podcast appearance that was live-streamed on YouTube. The 47-year-old billionaire went on the Joe Rogan Experience late on Thursday night and accepted a joint from the host, after a meandering conversation that also took in the dangers of AI and the possibility that China is spying on US citizens through their phones. Hours later, the company's chief accounting officer Dave Morton resigned, citing 'public attention' on the company. Meanwhile, shares plummeted nine per cent this morning, wiping $4.3 billion off the company's value. By close of trading they had recovered slightly to a 6.3 per cent drop, reducing the company's value by $3.1bn. It follows weeks of serious turbulence for both Musk and Tesla, after he falsely announced he was taking the company private in a deal with Saudi Arabia and accused a British hero diver of being a paedophile.


AI 'more dangerous than nukes': Elon Musk still firm on regulatory oversight (ZDNet)

#artificialintelligence

Video: Is regulating AI a bad idea? Entrepreneur Elon Musk has long held the position that innovators need to be aware of the social risks artificial intelligence (AI) presents to the future, and at South by Southwest (SXSW) on Sunday, the SpaceX founder sketched out his plan for a potential second coming of the Dark Ages, noting AI "scares the hell" out of him. Machine learning, task automation and robotics are already widely used in business. These and other AI technologies are about to multiply, and we look at how organizations can best take advantage of them. Appearing on a couch alongside his friend Jonathan Nolan, creator of the science-fiction western series Westworld, Musk said that although he is not usually an advocate for regulation and oversight, AI is a case where he is willing to make an exception.


World calls for international treaty to stop killer robots before rogue states acquire them

The Independent - Tech

There is widespread public support for a ban on so-called "killer robots", which campaigners say would "cross a moral line" after which it would be difficult to return. Polling across 26 countries found over 60 per cent of the thousands asked opposed lethal autonomous weapons that can kill with no human input, and only around a fifth backed them. The figures showed public support was growing for a treaty to regulate these controversial new technologies - a treaty which is already being pushed by campaigners, scientists and many world leaders. However, a meeting in Geneva at the close of last year ended in a stalemate after nations including the US and Russia indicated they would not support the creation of such a global agreement. Mary Wareham of Human Rights Watch, who coordinates the Campaign to Stop Killer Robots, compared the movement to successful efforts to eradicate landmines from battlefields.


Google's Eric Schmidt dismisses Hollywood-driven AI fears as unrealistic

#artificialintelligence

We are all familiar with the doomsday scenario depicted by many modern films, in which artificial intelligence goes bad and takes over the world. But this is not going to happen, according to Google chairman Eric Schmidt, who claims that super-intelligent robots will someday help us solve problems such as population growth and climate change. During a talk in Cannes, he said AI will be developed for the benefit of humanity and there will be systems in place in case anything goes awry. Artificial intelligence, he said, will let scientists solve some of the world's 'hard problems.'