

Robots Learn by Watching Human Behavior NVIDIA Blog

#artificialintelligence

Robots following coded instructions to complete a task? Robots learning to do things by watching how humans do it? Stanford's Animesh Garg and Marynel Vázquez shared their research in a talk on "Generalizable Autonomy for Robotic Mobility and Manipulation" at the GPU Technology Conference last week. In lay terms, generalizable autonomy is the idea that a robot can observe human behavior, and learn to imitate it in a way that's applicable to a variety of tasks and situations. Learning to cook by watching YouTube videos, for one.


Stanford University's Jackrabbot can navigate past tricky pedestrians to make local deliveries

Daily Mail - Science & tech

Elbowing your way through crowds can be slow going, but our ability to weave and dodge through a throng of people comes almost as second nature. For robots, however, this simple task can prove a major obstacle that currently limits their usefulness in public places. But now, a team from Stanford University says it has managed to create a droid that can navigate down streets without mowing down people walking in the opposite direction, making it better suited to local deliveries. The robot, dubbed Jackrabbot, takes its name from the nimble yet shy jackrabbit often found on the university's campus.


Video Friday: ATLAS on the Edge, Plant-Robot Hybrid, and Kuka Smash

IEEE Spectrum Robotics

Video Friday is your weekly selection of awesome robotics videos, collected by your edgy Automaton bloggers. We'll also be posting a weekly calendar of upcoming robotics events for the next two months; here's what we have so far (send us your events!): Let us know if you have suggestions for next week, and enjoy today's videos. IHMC has managed to get its ATLAS balancing on the edge of cinder blocks, steadying itself with outstretched arms as it does so. The robot is able to detect and explore partial footholds (in this case, line contacts).


Stanford's 'Jackrabbot' robot will attempt to learn the arcane and unspoken rules of pedestrians

#artificialintelligence

It's hard enough for a grown human to figure out how to navigate a crowd sometimes -- so what chance does a clumsy and naive robot have? To prevent future collisions and awkward "do I go left or right" situations, Stanford researchers are hoping their "Jackrabbot" robot can learn the rules of the road. The team, part of the Computational Vision and Geometry Lab, has already been working on computer vision algorithms that track and aim to predict pedestrian movements. But the rules are so complex, and subject to so many variations depending on the crowd, the width of the walkway, the time of day, whether there are bikes or strollers involved -- well, like any machine learning task, it takes a lot of data to produce a useful result. Furthermore, the algorithm they are developing is intended to be based entirely on observed data as interpreted by a neural network; no tweaking by researchers adding cues obvious to them ("in this situation, a person will definitely go left") is allowed. Their efforts so far are detailed in a paper the team will present at CVPR later this month.
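The prediction problem underlying this work can be illustrated with a deliberately naive baseline. The sketch below is for illustration only and is not the Stanford team's code: it hand-codes a constant-velocity rule, which is exactly the kind of researcher-supplied cue their data-driven, neural-network approach avoids.

```python
def predict_constant_velocity(track, horizon):
    """Extrapolate future (x, y) positions from the last observed step.

    track: list of (x, y) positions, oldest first (needs >= 2 points).
    horizon: number of future steps to predict.
    """
    (x0, y0), (x1, y1) = track[-2], track[-1]
    vx, vy = x1 - x0, y1 - y0  # last observed step used as a velocity estimate
    return [(x1 + vx * (k + 1), y1 + vy * (k + 1)) for k in range(horizon)]

# A pedestrian walking diagonally at constant speed:
observed = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.0)]
print(predict_constant_velocity(observed, 2))  # [(3.0, 1.5), (4.0, 2.0)]
```

A learned model trained on crowd data replaces this fixed rule with behavior inferred from observation, which is what lets it capture context like crowd density, walkway width, or the presence of bikes and strollers.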