Deep dynamics models for dexterous manipulation

Robohub

Dexterous manipulation with multi-fingered hands is a grand challenge in robotics: the versatility of the human hand is as yet unrivaled by the capabilities of robotic systems, and bridging this gap will enable more general and capable robots. Although some real-world tasks (like picking up a television remote or a screwdriver) can be accomplished with simple parallel jaw grippers, there are countless tasks (like functionally using the remote to change the channel or using the screwdriver to drive a screw) in which the dexterity enabled by redundant degrees of freedom is critical. In fact, dexterous manipulation is defined as being object-centric, with the goal of controlling object movement through precise control of forces and motions -- something that is not possible without the ability to apply forces to the object from multiple directions simultaneously. For example, using only two fingers to attempt common tasks such as opening the lid of a jar or hitting a nail with a hammer would quickly run into the challenges of slippage, complex contact forces, and underactuation. Although dexterous multi-fingered hands can indeed enable flexible and successful execution of a wide range of manipulation skills, many of these more complex behaviors are also notoriously difficult to control: they require finely balancing contact forces, breaking and reestablishing contacts repeatedly, and maintaining control of unactuated objects.


Robot hand learns to become more dexterous

#artificialintelligence

Pianists, surgeons, typists, gamers and baton-twirlers all learn to use their hands more skillfully as they ply their trade, but what about robots? Researchers at the University of Washington say they've developed one of the most capable five-fingered robot hands yet built, one that can learn to handle objects better and better without human intervention. The ADROIT Manipulation Platform draws upon machine learning and real-world feedback to improve its performance, rather than relying on its programmers to specify its every move. "Such dynamic dexterous manipulation with free objects has never been demonstrated before even in simulation, let alone the physical hardware results we have," Vikash Kumar, a UW doctoral student in computer science and engineering, told GeekWire in an email. Kumar and his colleagues discuss the project in a paper to be presented May 17 at the IEEE International Conference on Robotics and Automation.


Learning Dexterous In-Hand Manipulation

arXiv.org Artificial Intelligence

We use reinforcement learning (RL) to learn dexterous in-hand manipulation policies which can perform vision-based object reorientation on a physical Shadow Dexterous Hand. The training is performed in a simulated environment in which we randomize many of the physical properties of the system like friction coefficients and an object's appearance. Our policies transfer to the physical robot despite being trained entirely in simulation. Our method does not rely on any human demonstrations, but many behaviors found in human manipulation emerge naturally, including finger gaiting, multi-finger coordination, and the controlled use of gravity. Our results were obtained using the same distributed RL system that was used to train OpenAI Five. We also include a video of our results: https://youtu.be/jwSbzNHGflM
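The key transfer ingredient described above is domain randomization: each training episode samples perturbed physical parameters so the policy cannot overfit to one exact simulator. A minimal sketch of that sampling step (parameter names and ranges are illustrative, not from the paper's code):

```python
import random

def randomize_physics(base_params, rng, scale=0.2):
    """Return a copy of the simulator parameters with each value
    independently perturbed by up to +/-scale (uniform), in the
    spirit of domain randomization."""
    return {name: value * (1.0 + rng.uniform(-scale, scale))
            for name, value in base_params.items()}

# Hypothetical nominal parameters of the simulated hand and object.
base = {"friction": 1.0, "object_mass": 0.05, "actuator_gain": 10.0}
rng = random.Random(0)

# Each training episode sees a freshly randomized environment.
episodes = [randomize_physics(base, rng) for _ in range(3)]
```

In practice the randomized set would also include visual properties (like the object's appearance) for the vision-based policy, not just dynamics parameters.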


Learning Hierarchical Control for Robust In-Hand Manipulation

arXiv.org Artificial Intelligence

Tingguang Li, Krishnan Srinivasan, Max Qing-Hu Meng, Wenzhen Yuan and Jeannette Bohg. Abstract -- Robotic in-hand manipulation has been a longstanding challenge due to the complexity of modelling hand and object in contact and of coordinating finger motion for complex manipulation sequences. To address these challenges, the majority of prior work has either focused on model-based, low-level controllers or on model-free deep reinforcement learning, each of which has its own limitations. We propose a hierarchical method that relies on traditional, model-based controllers on the low level and learned policies on the mid level. The low-level controllers can robustly execute different manipulation primitives (reposing, sliding, flipping). We extensively evaluate our approach in simulation with a 3-fingered hand that controls three degrees of freedom of elongated objects. We show that our approach can move objects between almost all the possible poses in the workspace while keeping them firmly grasped. We also show that our approach is robust to inaccuracies in the object models and to observation noise. Finally, we show how our approach generalizes to objects of other shapes.

INTRODUCTION. Dexterous manipulation refers to the ability to change the pose of an object to any other pose within the workspace of a hand [1-3]. In this paper, we are particularly concerned with in-hand manipulation, where the object is continuously moved within the hand without dropping. This ability is used frequently in human manipulation, e.g. when grasping a tool and readjusting it within the hand, when inspecting an object, when assembling objects or when adjusting an unstable grasp. Yet, in-hand manipulation remains a longstanding challenge in robotics despite the availability of multi-fingered dexterous hands such as [4-6].
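The hierarchy described above can be sketched as a mid-level policy that selects among low-level manipulation primitives until the object reaches a goal pose. The primitive implementations and the greedy selection rule below are illustrative stand-ins (the paper's low-level controllers are model-based and its mid-level policy is learned):

```python
# Object pose is simplified to (position_along_fingers, flipped).
# Each low-level "controller" is a function from pose to pose.

def slide(pose):
    """Translate the object one step along the fingers (+x only,
    a simplification of the paper's sliding primitive)."""
    x, flipped = pose
    return (x + 1, flipped)

def flip(pose):
    """Rotate the object 180 degrees in the grasp."""
    x, flipped = pose
    return (x, not flipped)

PRIMITIVES = {"slide": slide, "flip": flip}

def mid_level_policy(pose, goal):
    """Greedy stand-in for the learned mid-level policy: pick the
    primitive that moves the pose toward the goal, or None if done."""
    x, flipped = pose
    gx, gflipped = goal
    if flipped != gflipped:
        return "flip"
    if x != gx:
        return "slide"
    return None

def run(pose, goal, max_steps=10):
    """Alternate mid-level selection and low-level execution."""
    for _ in range(max_steps):
        name = mid_level_policy(pose, goal)
        if name is None:
            break
        pose = PRIMITIVES[name](pose)
    return pose
```

The design point this illustrates is the division of labor: the low level only needs to execute a small, fixed vocabulary of primitives robustly, while the mid level only needs to sequence them.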


This five-fingered robot hand is close to human in functionality

#artificialintelligence

This five-fingered robot hand developed by University of Washington computer science and engineering researchers can learn how to perform dexterous manipulation -- like spinning a tube full of coffee beans -- on its own, rather than having humans program its actions. A University of Washington team of computer scientists and engineers has built what they say is one of the most highly capable five-fingered robot hands in the world. It can perform dexterous manipulation and learn from its own experience without needing humans to direct it. Their work is described in a paper to be presented May 17 at the IEEE International Conference on Robotics and Automation. "Hand manipulation is one of the hardest problems that roboticists have to solve," said lead author Vikash Kumar, a UW doctoral student in computer science and engineering.