Visual gesture-based robot guidance with a modular neural system

Littmann, Enno, Drees, Andrea, Ritter, Helge

Neural Information Processing Systems 

We report on the development of the modular neural system "SEE EAGLE" for the visual guidance of robot pick-and-place actions. Several neural networks are integrated into a single system that visually recognizes human hand pointing gestures from stereo pairs of color video images. The output of the hand recognition stage is processed by a set of color-sensitive neural networks to determine the Cartesian location of the target object referenced by the pointing gesture. Finally, this information is used to guide a robot to grasp the target object and put it at another location, which can be specified by a second pointing gesture. The current system identifies the location of the referenced target object to an accuracy of 1 cm within a workspace area of 50 x 50 cm.
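The abstract describes a three-stage pipeline: gesture recognition from stereo images, neural localization of the referenced object, and robot pick-and-place. The sketch below illustrates that dataflow only; all class and function names, the stubbed outputs, and the normalized-coordinate mapping are illustrative assumptions, not details from the paper.

```python
from dataclasses import dataclass

# Illustrative sketch of the SEE EAGLE pipeline stages; the neural
# stages are stubbed with fixed values, since the actual networks are
# not specified in the abstract.

@dataclass
class StereoPair:
    left: list   # left color image (placeholder for pixel data)
    right: list  # right color image (placeholder for pixel data)

def recognize_pointing_gesture(images: StereoPair) -> tuple:
    """Stage 1 (stub): hand recognition networks would return the
    pointing target in normalized image coordinates."""
    return (0.25, 0.40)

def localize_target(pointing: tuple) -> tuple:
    """Stage 2 (stub): color-sensitive networks map the gesture to a
    Cartesian location in the 50 x 50 cm workspace (assumed linear
    mapping for illustration only)."""
    x_cm = round(pointing[0] * 50.0, 1)
    y_cm = round(pointing[1] * 50.0, 1)
    return (x_cm, y_cm)

def pick_and_place(source: tuple, destination: tuple) -> str:
    """Stage 3 (stub): issue a robot command to grasp at `source` and
    release at `destination`."""
    return f"PICK {source} PLACE {destination}"

if __name__ == "__main__":
    # First gesture selects the object, a second gesture the drop site.
    pick = localize_target(recognize_pointing_gesture(StereoPair([], [])))
    place = localize_target((0.8, 0.6))
    print(pick_and_place(pick, place))
```

The stubs stand in for the trained neural modules; the point is only the modular composition the abstract outlines, with each stage's output feeding the next.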
