

Google Teaches Robot to Toss Bananas Better Than You Do

IEEE Spectrum Robotics

As anyone who's ever tried to learn how to throw something properly can attest, it takes a lot of practice to get it right. Once you have it down, though, it makes you much more efficient at a variety of weird tasks: Want to pass an orange ball through a hoop that's inconveniently far off the ground? Want to knock small sticks off the top of larger sticks with a ball? Want to toss a telephone pole in Scotland? Most humans, unfortunately, aren't talented enough for the throwing skills we've developed for strange reasons to translate well to everyday practical tasks.


Now Google's robotics lab focuses on machine learning

#artificialintelligence

Google has teamed up with researchers from Princeton, Columbia, and MIT to create TossingBot, which can learn on its own how to pick up and toss various objects into the right containers. At first, the mechanical arm didn't know what to do with the pile of objects it was presented with. After 14 hours of trial and error, analyzing its attempts with overhead cameras, it was finally able to toss the right item into the right container 85 percent of the time. As the tech giant explains, programming a robot to properly grasp and throw specific objects -- a screwdriver, for instance, could land in different ways depending on where you hold it -- is incredibly difficult. By using machine learning, the robot instead teaches itself from experience, adapting to new scenarios and learning on the fly.


TossingBot: Learning to Throw Arbitrary Objects with Residual Physics

Andy Zeng, Shuran Song, Johnny Lee, Alberto Rodriguez, Thomas Funkhouser

arXiv.org Artificial Intelligence

We investigate whether a robot arm can learn to pick and throw arbitrary objects into selected boxes quickly and accurately. Throwing has the potential to increase the physical reachability and picking speed of a robot arm. However, precisely throwing arbitrary objects in unstructured settings presents many challenges: from acquiring reliable pre-throw conditions (e.g. initial pose of object in manipulator) to handling varying object-centric properties (e.g. mass distribution, friction, shape) and dynamics (e.g. aerodynamics). In this work, we propose an end-to-end formulation that jointly learns to infer control parameters for grasping and throwing motion primitives from visual observations (images of arbitrary objects in a bin) through trial and error. Within this formulation, we investigate the synergies between grasping and throwing (i.e., learning grasps that enable more accurate throws) and between simulation and deep learning (i.e., using deep networks to predict residuals on top of control parameters predicted by a physics simulator). The resulting system, TossingBot, is able to grasp and throw arbitrary objects into boxes located outside its maximum reach range at 500+ mean picks per hour (600+ grasps per hour with 85% throwing accuracy); and generalizes to new objects and target locations. Videos are available at https://tossingbot.cs.princeton.edu
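The "residual physics" idea in the abstract can be sketched in a few lines: an analytic ballistics model proposes a release speed for a given target distance, and a learned network adds a small correction to account for effects the simulator ignores (aerodynamics, grasp-dependent release conditions). The sketch below is a minimal illustration of that structure, not the paper's implementation; `residual_model` is a hypothetical stand-in for the trained deep network, and the fixed 45-degree release angle is an assumption made here for simplicity.

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def ballistic_release_speed(distance, angle_deg=45.0):
    """Ideal drag-free release speed so a projectile launched at
    `angle_deg` lands `distance` meters away at the release height.
    From the range equation R = v^2 * sin(2*theta) / g."""
    theta = math.radians(angle_deg)
    return math.sqrt(distance * G / math.sin(2 * theta))

def residual_throw_speed(distance, residual_model):
    """Residual physics: analytic estimate plus a learned correction.
    `residual_model` (hypothetical) maps the target distance to a
    small velocity residual delta learned from trial and error."""
    v_hat = ballistic_release_speed(distance)   # physics proposal
    delta = residual_model(distance)            # learned residual
    return v_hat + delta

# With a zero residual we recover the pure physics estimate;
# a trained model would nudge the speed up or down per object.
speed = residual_throw_speed(1.0, residual_model=lambda d: 0.0)
```

Learning only the residual keeps the search space small: the network never has to rediscover ballistics, just the object-specific deviations from it.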