Elon Musk, DeepMind and AI researchers promise not to develop robot killing machines

The Independent - Tech

Elon Musk and many of the world's most respected artificial intelligence researchers have committed not to build autonomous killer robots. The public pledge not to make any "lethal autonomous weapons" comes amid increasing concern about how machine learning and AI will be used on the battlefields of the future. The signatories to the new pledge – who include the founders of DeepMind, a founder of Skype, and leading academics from across the industry – promise that they will not allow the technology they create to be used to help build killing machines.



Why We Must Not Build Automated Weapons of War

#artificialintelligence

Over 100 CEOs of artificial intelligence and robotics firms recently signed an open letter warning that their work could be repurposed to build lethal autonomous weapons -- "killer robots." They argued that building such weapons would open a "Pandora's box" and could forever alter war. Over 30 countries already have armed drones or are developing them, and with each successive generation, those drones gain more autonomy. Automation has long been used in weapons to help identify targets and maneuver missiles.


Podcast: Law and Ethics of Artificial Intelligence - Future of Life Institute

#artificialintelligence

The rise of artificial intelligence presents not only technical challenges, but also important legal and ethical challenges for society, especially regarding machines such as autonomous weapons and self-driving cars. To discuss these issues, I interviewed Matt Scherer and Ryan Jenkins. Matt is an attorney and legal scholar whose scholarship focuses on the intersection between law and artificial intelligence. Ryan is an assistant professor of philosophy and a senior fellow at the Ethics and Emerging Sciences Group at California Polytechnic State University, where he studies the ethics of technology. In this podcast, we discuss accountability and transparency in autonomous systems, government regulation vs. self-regulation, fake news, and the future of autonomous systems.


How to train your ROBOT

#artificialintelligence

Robots are like dogs because, according to some experts, a badly-trained robot could end up misbehaving just like a badly-trained dog. This warning came at a meeting discussing the future of robot and human interactions, held in London this week. But the panel, which emphasised the importance of regulations controlling AI, agreed that a doomsday situation in which robots take over is unlikely to happen soon. Organised by the EPSRC UK Robotics and Autonomous Systems Network (UK-RAS Network), UK Robotics Week included a series of events across the country, aiming to get the public engaged with the developments and debate in and around robotics.