Elon Musk, DeepMind and AI researchers promise not to develop robot killing machines

The Independent - Tech

Elon Musk and many of the world's most respected artificial intelligence researchers have committed not to build autonomous killer robots. The public pledge not to make any "lethal autonomous weapons" comes amid increasing concern about how machine learning and AI will be used on the battlefields of the future. The signatories to the new pledge – who include the founders of DeepMind, a founder of Skype, and leading academics from across the industry – promise that they will not allow the technology they create to be used to help build killing machines.


This start-up is building a humanoid robot that could soon be delivering packages to your door

#artificialintelligence

So far, Agility Robotics has sold three Cassie robots (the University of Michigan is one customer) and has sales of another three in progress. The goal is to sell another six, "so optimistically 12 customers total for the entire production run of Cassie," Shelton tells CNBC Make It. "That is obviously, though, a relatively compact market, and is not why we're doing the company," he says. Indeed, the next generation of the company's legged robots will also have arms, says Shelton, and one target use for the more humanoid robot will be carrying packages from delivery trucks to your door. Shelton says his own house is a perfect example of how a legged robot could assist with deliveries.



The Morning After: Robot dogs and Audi's electric supercar

Engadget

Along with Nikon's new camera, we also have hands-on impressions of some new drones, a supercar and Sony's new Aibo. Sony's robot dog is back, and the new model will arrive in the US later this year. Pre-orders open in September for the $2,899 First Litter Edition, with accessories and three years of cloud services included. Devindra Hardawar saw a few of the AI-powered pups at an NYC event and found that "if you're an early adopter, or someone allergic to most animals, it might just fill the fur baby-sized hole in your heart." Yesterday, drone behemoth DJI didn't just reveal the Mavic 2 Pro; it also introduced a second option in the line: the Mavic 2 Zoom.


UAV-GESTURE: A Dataset for UAV Control and Gesture Recognition

arXiv.org Machine Learning

Current UAV-recorded datasets are mostly limited to action recognition and object tracking, while existing gesture-signal datasets were mostly recorded indoors. There is currently no publicly available outdoor video dataset of UAV commanding signals. Gesture signals can be used effectively with UAVs by leveraging the UAV's visual sensors and their operational simplicity. To fill this gap and enable research in wider application areas, we present a UAV gesture-signals dataset recorded in an outdoor setting. We selected 13 gestures suitable for basic UAV navigation and command from general aircraft-handling and helicopter-handling signals. We provide 119 high-definition video clips consisting of 37,151 frames. The overall baseline gesture recognition performance, computed using a Pose-based Convolutional Neural Network (P-CNN), is 91.9%. All frames are annotated with body joints and gesture classes in order to extend the dataset's applicability to a wider research area, including gesture recognition, action recognition, human pose recognition and situation awareness.
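
The annotation scheme described in the abstract (per-frame body joints plus a gesture class for each clip) lends itself to a simple loading loop. Below is a minimal Python sketch of how such annotations might be traversed; the directory layout and the "clip", "gesture" and "joints" keys are hypothetical assumptions for illustration, not the dataset's documented format.

    # Minimal sketch of iterating a gesture-video dataset like UAV-GESTURE.
    # The layout and annotation schema here are assumptions for illustration;
    # consult the dataset's own documentation for the real format.
    import json
    from pathlib import Path

    DATASET_ROOT = Path("UAV-GESTURE")  # hypothetical root directory

    def load_clips(root):
        """Yield (video path, gesture label, per-frame joints) triples."""
        for ann_path in sorted(root.glob("annotations/*.json")):
            ann = json.loads(ann_path.read_text())
            video_path = root / "clips" / ann["clip"]  # assumed key: clip file name
            label = ann["gesture"]                     # assumed key: one of the 13 classes
            joints = ann["joints"]                     # assumed key: per-frame body-joint coordinates
            yield video_path, label, joints

    if __name__ == "__main__":
        frames = 0
        labels = set()
        for _video, label, joints in load_clips(DATASET_ROOT):
            labels.add(label)
            frames += len(joints)
        print(f"{len(labels)} gesture classes, {frames} annotated frames")

A loop like this would be enough to feed a frame-level pose pipeline such as the P-CNN baseline mentioned in the abstract, since each frame already carries its joint annotations.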