In work that could one day help find missing people in forests, a team of researchers used deep learning to train an autonomous drone to navigate a previously unseen trail in a densely wooded forest entirely on its own. The researchers, from the Dalle Molle Institute for Artificial Intelligence, the University of Zurich, and NCCR Robotics, mounted three GoPro cameras on a headset and captured over 20,000 images during hours of trail hikes in the Swiss Alps. These images were then used, together with an NVIDIA GeForce GTX 580 GPU, to teach the model what the boundaries of a hiking trail generally look like. Check out the autonomous drone in action in the video below. The researchers claim the resulting deep learning network is even better than humans at determining the correct direction of the trails it travels, choosing the correct direction 85 percent of the time.
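One plausible reading of the three-camera setup is that each training image can be labelled automatically by which head-mounted camera captured it, and the trained classifier then maps a view to a steering command. The sketch below illustrates that labelling idea; the function names and the camera-to-label mapping are our own illustration, not the researchers' implementation.

```python
import numpy as np

# Three possible outputs for a view of the trail.
CLASSES = ["turn_left", "go_straight", "turn_right"]

def label_for_camera(camera):
    """Map a camera position ('left', 'center', 'right') to a training label.

    An image from the left-facing camera shows the trail off to the
    viewer's right, so the correct action for that view is to turn
    right (and vice versa); the center camera means go straight.
    """
    return {"left": "turn_right",
            "center": "go_straight",
            "right": "turn_left"}[camera]

def steering_command(class_probabilities):
    """Pick the steering command with the highest predicted probability."""
    return CLASSES[int(np.argmax(class_probabilities))]

print(label_for_camera("left"))           # turn_right
print(steering_command([0.1, 0.7, 0.2]))  # go_straight
```

This labelling trick avoids hand-annotating 20,000 images: the geometry of the headset supplies the ground truth for free.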
Trails are narrow ribbons of civilization cutting through wilderness. They are as much about what is worth exploring as they are about what's off limits. A hiker loses a trail, and suddenly they're in a deep wilderness, unmoored from the world until they stumble back to that thin filament again. To find missing hikers, it makes sense to look near trails, and to do that, a team at the University of Zurich is training drones to identify and follow trails into the woods. The drone used by the Swiss researchers observes the environment through a pair of small cameras, similar to those used in smartphones.
Technology that can mimic and improve on the cognitive abilities of the human brain has been the stuff of dystopian movie storylines for decades. But for large companies and research labs, such artificial intelligence has been a longstanding pursuit for both day-to-day and groundbreaking uses. Now, a specific breakthrough in AI -- deep learning -- is allowing businesses to use the vast amounts of newly available data to teach computers how to learn. Deep learning uses layers of algorithms known as neural networks, which are designed to loosely represent the layers of the human brain. These algorithms allow machines to learn patterns.
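The "layers" in that description can be made concrete with a few lines of numpy. This is a minimal sketch of a two-layer forward pass, not any particular production system; the sizes and weights are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    """Nonlinearity applied between layers."""
    return np.maximum(0.0, x)

# Two weight matrices = two stacked layers of learned parameters.
W1 = rng.normal(size=(4, 8))   # input features -> hidden units
W2 = rng.normal(size=(8, 3))   # hidden units   -> output scores

def forward(x):
    """Pass an input vector through the stacked layers.

    Each layer is a linear map followed by a nonlinearity; stacking
    layers lets the network represent patterns a single layer cannot.
    """
    hidden = relu(x @ W1)
    return hidden @ W2

scores = forward(rng.normal(size=4))
print(scores.shape)  # (3,)
```

Training would adjust `W1` and `W2` so the output scores match labelled examples; the layered structure is what the "deep" in deep learning refers to.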
Last Friday, we posted a bunch of videos from the AAAI Video Competition. There are lots of good videos (really, they're all good), and we didn't want to play favorites or otherwise influence your votes, so we didn't add much in the way of commentary or anything like that. But it's been almost a week, and a few of those videos are certainly worth taking a closer look at. First, we have a video accompanying "Evolution of Collective Behaviors for a Real Swarm of Aquatic Surface Robots," by Miguel Duarte, Vasco Costa, Jorge Gomes, Tiago Rodrigues, Fernando Silva, Sancho Moura Oliveira, and Anders Lyhne Christensen, from the BioMachines Lab and Institute of Telecommunications, in Lisbon, Portugal. This video is fantastic because, among other reasons, I HAD THAT EXACT SAME PLAYMOBIL PIRATE SHIP WHEN I WAS A KID.
Broggini, Denis (Dalle Molle Institute for Artificial Intelligence (IDSIA)) | Gromov, Boris (Dalle Molle Institute for Artificial Intelligence (IDSIA)) | Giusti, Alessandro (Dalle Molle Institute for Artificial Intelligence (IDSIA)) | Gambardella, Luca Maria (Dalle Molle Institute for Artificial Intelligence (IDSIA))
We propose a learning-based system for detecting when a user performs a pointing gesture, using data acquired from IMU sensors, by means of a 1D convolutional neural network. We quantitatively evaluate the resulting detection accuracy, and discuss an application to a human-robot interaction task where pointing gestures are used to guide a quadrotor landing.
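The core operation of a 1D convolutional network on IMU streams can be sketched with a single hand-written filter. The toy signal, kernel, and threshold below are our own illustration of the mechanism, assuming one accelerometer channel; the actual system learns its filters and classifier from data.

```python
import numpy as np

def conv1d(signal, kernel):
    """Slide a kernel over a 1D signal (valid cross-correlation, stride 1),
    as a convolutional layer does along the time axis of sensor data."""
    n = len(signal) - len(kernel) + 1
    return np.array([np.dot(signal[i:i + len(kernel)], kernel)
                     for i in range(n)])

# Toy IMU channel: flat, then a sharp swing such as an arm raise might produce.
imu = np.array([0.0, 0.0, 0.0, 1.0, 2.0, 1.0, 0.0, 0.0])
edge_detector = np.array([-1.0, 0.0, 1.0])  # responds to rising/falling motion

response = conv1d(imu, edge_detector)
detected = bool(np.max(np.abs(response)) > 1.5)  # threshold stands in for the learned classifier
print(detected)  # True
```

A real 1D CNN stacks many such learned filters and feeds their responses to further layers, but each filter still just slides along the time axis like this one.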