Elon Musk, DeepMind and AI researchers promise not to develop robot killing machines

The Independent - Tech

Elon Musk and many of the world's most respected artificial intelligence researchers have committed not to build autonomous killer robots. The public pledge not to make any "lethal autonomous weapons" comes amid increasing concern about how machine learning and AI will be used on the battlefields of the future. The signatories to the new pledge – who include the founders of DeepMind, a founder of Skype, and leading academics from across the industry – promise that they will not allow the technology they create to be used to help build killing machines.


This start-up is building a humanoid robot that could soon be delivering packages to your door

#artificialintelligence

So far, Agility Robotics has sold three Cassie robots (the University of Michigan is one customer) and has sales for another three in progress. The goal is to sell another six Cassie robots, "so optimistically 12 customers total for the entire production run of Cassie," Shelton tells CNBC Make It. "That is obviously, though, a relatively compact market, and is not why we're doing the company," he says. Indeed, the next generation of the company's legged robots will also have arms, says Shelton. And one target use for the more humanoid robot will be carrying packages from delivery trucks to your door. Shelton says his own house is a perfect example of how a legged robot could assist with deliveries.


US Air Force funds Explainable-AI for UAV tech

#artificialintelligence

Z Advanced Computing, Inc. (ZAC) of Potomac, MD announced on August 27 that it has been funded by the US Air Force to apply ZAC's detailed 3D image recognition technology, based on Explainable-AI, to aerial image and object recognition for drones (unmanned aerial vehicles, or UAVs). ZAC says it is the first to demonstrate Explainable-AI in which various attributes and details of 3D (three-dimensional) objects can be recognized from any view or angle. "With our superior approach, complex 3D objects can be recognized from any direction, using only a small number of training samples," said Dr. Saied Tadayon, CTO of ZAC. "For complex tasks, such as drone vision, you need ZAC's superior technology to handle detailed 3D image recognition." "You cannot do this with the other techniques, such as Deep Convolutional Neural Networks, even with an extremely large number of training samples. That's basically hitting the limits of the CNNs," continued Dr. Bijan Tadayon, CEO of ZAC.


AI guides single-camera drone through hallways it's never seen before

#artificialintelligence

Researchers at the University of Colorado recently demonstrated a system that helps robots figure out the direction of hiking trails from camera footage, and scientists at ETH Zurich described in a January paper a machine learning framework that helps four-legged robots get up from the ground when they trip and fall. But might such AI perform just as proficiently on a drone rather than on machines planted firmly on the ground? A team at the University of California at Berkeley set out to find out. In a newly published paper on the preprint server arXiv ("Generalization through Simulation: Integrating Simulated and Real Data into Deep Reinforcement Learning for Vision-Based Autonomous Flight"), the team proposes a "hybrid" deep reinforcement learning algorithm that combines data from both a digital simulation and the real world to guide a quadcopter through carpeted corridors. "In this work, we … aim to devise a transfer learning algorithm where the physical behavior of the vehicle is learned," the paper's authors wrote.
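The core of the "hybrid" idea is to train mostly on cheap simulated flight while still grounding the policy in a smaller amount of real-world data. The snippet below is a minimal sketch of that data-mixing step only, not the authors' implementation; the HybridReplayBuffer class, the sim_fraction parameter, and the transition format are hypothetical names chosen for illustration.

```python
import random
from collections import deque


class HybridReplayBuffer:
    """Toy replay buffer that mixes simulated and real transitions.

    Illustrative sketch only: each training batch is drawn partly from
    plentiful simulated rollouts and partly from scarce real flight data.
    """

    def __init__(self, capacity=100_000, sim_fraction=0.8):
        self.sim = deque(maxlen=capacity)    # transitions collected in the simulator
        self.real = deque(maxlen=capacity)   # transitions collected on the real quadcopter
        self.sim_fraction = sim_fraction     # share of each batch drawn from simulation

    def add(self, transition, simulated):
        # transition could be e.g. (observation, action, reward, next_observation)
        (self.sim if simulated else self.real).append(transition)

    def sample(self, batch_size):
        n_sim = int(batch_size * self.sim_fraction)
        n_real = batch_size - n_sim
        batch = random.sample(list(self.sim), min(n_sim, len(self.sim)))
        batch += random.sample(list(self.real), min(n_real, len(self.real)))
        random.shuffle(batch)
        return batch
```

In the paper itself, the real-world data serves specifically to learn the vehicle's physical behavior while perception generalizes from simulation; the buffer above only captures the broader idea of blending the two data sources during training.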


Scientists Warn AI Can Be Dangerous as Well as Helpful to Humans

#artificialintelligence

Artificial intelligence, or AI, no longer exists only in science fiction movies and books. Scientists warn that AI has changed, and will continue to change, almost every aspect of how people conduct business and live. Researchers say artificial intelligence can be a threat to humans as well as a help. From the iPhone's personal assistant Siri, to searches on the Internet, to autopilot functions, simple artificial intelligence has been around for some time, but it is quickly getting more complex and more intelligent. "If we are going to make systems that are going to be more intelligent than us, it's absolutely essential for us to understand how to absolutely guarantee that they only do things that we are happy with," said Stuart Russell, computer science professor at the University of California Berkeley.