
Deep Dynamics Models for Dexterous Manipulation

#artificialintelligence

Figure 1: Our approach (PDDM) can efficiently and effectively learn complex dexterous manipulation skills in both simulation and the real world. Here, the learned model controls the 24-DoF Shadow Hand to rotate two free-floating Baoding balls in the palm, using just 4 hours of real-world data and no prior knowledge of, or assumptions about, the system or environment dynamics.

Dexterous manipulation with multi-fingered hands is a grand challenge in robotics: the versatility of the human hand is as yet unrivaled by the capabilities of robotic systems, and bridging this gap will enable more general and capable robots. Although some real-world tasks (like picking up a television remote or a screwdriver) can be accomplished with simple parallel-jaw grippers, there are countless tasks (like functionally using the remote to change the channel, or using the screwdriver to drive in a screw) in which the dexterity afforded by redundant degrees of freedom is critical. In fact, dexterous manipulation is defined as being object-centric, with the goal of controlling object movement through precise control of forces and motions, something that is not possible without the ability to apply contact forces to the object from multiple directions simultaneously.
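The core recipe behind approaches like PDDM is model-based control: fit a neural-network dynamics model to logged interaction data, then plan actions online by rolling candidate action sequences through that model and executing only the first action before replanning. The sketch below shows this idea with simple random-shooting planning; PDDM itself uses an ensemble of models and a more sophisticated reward-weighted refinement of the sampled sequences, and the names here (predict_next_state, reward_fn, the sampling range) are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def plan_action(state, predict_next_state, reward_fn,
                horizon=10, n_candidates=500, action_dim=24, rng=None):
    """Return the first action of the highest-return sampled action sequence."""
    rng = rng or np.random.default_rng()
    # Sample candidate action sequences uniformly in [-1, 1] (illustrative).
    candidates = rng.uniform(-1.0, 1.0,
                             size=(n_candidates, horizon, action_dim))
    returns = np.zeros(n_candidates)
    for i, seq in enumerate(candidates):
        s = state
        for a in seq:
            s = predict_next_state(s, a)  # learned model, not a simulator
            returns[i] += reward_fn(s, a)
    best = int(np.argmax(returns))
    # Execute only the first action, then replan at the next step (MPC).
    return candidates[best, 0]
```

Replanning at every step keeps the controller closed-loop, which is what lets it tolerate errors in the learned model.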


Robot hand learns to become more dexterous

#artificialintelligence

Pianists, surgeons, typists, gamers and baton-twirlers all learn to use their hands more skillfully as they ply their trade, but what about robots? Researchers at the University of Washington say they've developed one of the most capable five-fingered robot hands yet built, one that can learn to handle objects better and better without human intervention. The ADROIT Manipulation Platform draws upon machine learning and real-world feedback to improve its performance, rather than relying on its programmers to specify its every move. "Such dynamic dexterous manipulation with free objects has never been demonstrated before, even in simulation, let alone in the physical hardware results we have," Vikash Kumar, a UW doctoral student in computer science and engineering, told GeekWire in an email. Kumar and his colleagues discuss the project in a paper to be presented May 17 at the IEEE International Conference on Robotics and Automation.


Learning Dexterous In-Hand Manipulation

arXiv.org Artificial Intelligence

We use reinforcement learning (RL) to learn dexterous in-hand manipulation policies which can perform vision-based object reorientation on a physical Shadow Dexterous Hand. The training is performed in a simulated environment in which we randomize many of the physical properties of the system like friction coefficients and an object's appearance. Our policies transfer to the physical robot despite being trained entirely in simulation. Our method does not rely on any human demonstrations, but many behaviors found in human manipulation emerge naturally, including finger gaiting, multi-finger coordination, and the controlled use of gravity. Our results were obtained using the same distributed RL system that was used to train OpenAI Five. We also include a video of our results: https://youtu.be/jwSbzNHGflM
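The domain randomization described above can be sketched as follows: at the start of every simulated episode, physical and visual parameters are resampled so the policy cannot overfit to any single simulator configuration and must become robust enough to transfer to the real robot. This is a minimal illustration assuming a gym-style environment; the parameter names, ranges, and the env.set_physics hook are hypothetical, not OpenAI's actual API or values.

```python
import random

def sample_randomized_params(rng):
    """Resample simulator parameters; names and ranges are illustrative."""
    return {
        "friction": rng.uniform(0.5, 1.5),           # contact friction scale
        "object_mass_scale": rng.uniform(0.8, 1.2),  # object mass multiplier
        "actuator_gain": rng.uniform(0.9, 1.1),      # motor strength scale
        "object_hue_shift": rng.uniform(-0.1, 0.1),  # appearance perturbation
    }

def run_randomized_episode(env, policy, rng):
    """Run one episode in a freshly randomized simulation."""
    env.set_physics(sample_randomized_params(rng))  # hypothetical env hook
    obs, done, total_reward = env.reset(), False, 0.0
    while not done:
        obs, reward, done, _ = env.step(policy(obs))  # gym-style API
        total_reward += reward
    return total_reward
```

Because a fresh parameter set is drawn each episode, the policy effectively trains on a distribution of simulators, of which the real world is, ideally, just one more sample.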


State-Only Imitation Learning for Dexterous Manipulation

arXiv.org Machine Learning

Dexterous manipulation has been a long-standing challenge in robotics. Recently, modern model-free RL has demonstrated impressive results on a number of problems. However, complex domains like dexterous manipulation remain a challenge for RL due to its high sample complexity. To address this, current approaches employ expert demonstrations in the form of state-action pairs, which are difficult to obtain for real-world settings such as learning from videos. In this work, we move toward a more realistic setting and explore state-only imitation learning. To tackle this setting, we train an inverse dynamics model and use it to predict actions for state-only demonstrations. The inverse dynamics model and the policy are trained jointly. Our method performs on par with state-action approaches and considerably outperforms RL alone. By not relying on expert actions, we are able to learn from demonstrations with different dynamics, morphologies, and objects.
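A minimal sketch of the inverse-dynamics idea described above: a model f(s_t, s_{t+1}) -> a_t is trained supervisedly on the agent's own (state, action, next state) transitions, then used to label state-only demonstrations with inferred actions. The architecture and training step below are illustrative assumptions, not the paper's exact setup.

```python
import torch
import torch.nn as nn

class InverseDynamicsModel(nn.Module):
    """Predicts the action a_t that produced the transition s_t -> s_{t+1}."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, s, s_next):
        return self.net(torch.cat([s, s_next], dim=-1))

def inverse_dynamics_update(model, optimizer, s, a, s_next):
    """One supervised step on transitions from the agent's own rollouts."""
    loss = nn.functional.mse_loss(model(s, s_next), a)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Labeling a state-only demonstration with inferred actions:
#   demo_actions = model(demo_states[:-1], demo_states[1:])
```

Because only successive states are needed from the demonstrator, the same recipe applies even when the demonstrations come from a different embodiment or dynamics than the learner's.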


OpenAI's 'state-of-the-art' system gives robots humanlike dexterity

#artificialintelligence

OpenAI, a nonprofit, San Francisco-based AI research company backed by Elon Musk, Reid Hoffman, and Peter Thiel, among other titans of industry, made headlines in June when it announced that the latest version of its Dota 2-playing AI, dubbed OpenAI Five, managed to beat amateur players. Today, it unveiled another first: a robotics system that can manipulate objects with humanlike dexterity. In a forthcoming paper ("Learning Dexterous In-Hand Manipulation"), OpenAI researchers describe a system that uses a reinforcement learning model, in which the AI learns through trial and error, to direct robot hands in grasping and manipulating objects with state-of-the-art precision. More impressive still, the system was trained entirely digitally, in a computer simulation, without any human demonstrations to learn from. "While dexterous manipulation of objects is a fundamental everyday task for humans, it is still challenging for autonomous robots," the team writes.