AI researchers test a robot's dexterity by handing it a Rubik's cube

#artificialintelligence

Humans can manipulate Rubik's cubes with relative ease, but robots have historically had a tougher go of it. That's not to suggest there aren't exceptions to the rule -- an MIT invention recently solved a cube in a record-breaking 0.38 seconds -- but they typically involve purpose-built motors and controls. Encouragingly, a group of researchers at Tencent and the Chinese University of Hong Kong say they've designed a Rubik's cube manipulator that uses multi-fingered hands. "Dexterous in-hand manipulation is a key building block for robots to achieve human-level dexterity, and accomplish everyday tasks which involve rich contact," wrote the researchers. "Despite concerted progress, reliable multi-fingered dexterous hand manipulation has remained an open challenge, due to its complex contact patterns, high dimensional action space, and fragile mechanical structure."


This five-fingered robot hand learns to get a grip on its own

#artificialintelligence

Robots today can perform space missions, solve a Rubik's cube, sort hospital medication and even make pancakes. But most can't manage the simple act of grasping a pencil and spinning it around to get a solid grip. Intricate tasks that require dexterous in-hand manipulation--rolling, pivoting, bending, sensing friction and other things humans do effortlessly with our hands--have proved notoriously difficult for robots. Now, a University of Washington team of computer science and engineering researchers has built a robot hand that can not only perform dexterous manipulation but also learn from its own experience without needing humans to direct it. Their latest results are detailed in a paper to be presented May 17 at the IEEE International Conference on Robotics and Automation.


Dexterous Manipulation with Deep Reinforcement Learning: Efficient, General, and Low-Cost

arXiv.org Artificial Intelligence

Dexterous multi-fingered robotic hands can perform a wide range of manipulation skills, making them an appealing component for general-purpose robotic manipulators. However, such hands pose a major challenge for autonomous control, due to the high dimensionality of their configuration space and complex intermittent contact interactions. In this work, we propose deep reinforcement learning (deep RL) as a scalable solution for learning complex, contact-rich behaviors with multi-fingered hands. Deep RL provides an end-to-end approach to directly map sensor readings to actions, without the need for task-specific models or policy classes. We show that contact-rich manipulation behavior with multi-fingered hands can be learned by directly training with model-free deep RL algorithms in the real world, with minimal additional assumptions and without the aid of simulation. We learn a variety of complex behaviors on two different low-cost hardware platforms. We show that each task can be learned entirely from scratch, and we further study how the learning process can be accelerated by using a small number of human demonstrations to bootstrap learning. Our experiments demonstrate that complex multi-fingered manipulation skills can be learned in the real world in about 4-7 hours for most tasks, and that demonstrations can decrease this to 2-3 hours, indicating that direct deep RL training in the real world is a viable and practical alternative to simulation and model-based control. \url{https://sites.google.com/view/deeprl-handmanipulation}
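The core idea in the abstract above, model-free deep RL that maps observations to actions from sampled rewards alone, without a model of the task, can be illustrated with a toy sketch. This is not the paper's algorithm or hardware setup: it is a minimal REINFORCE-style policy-gradient loop on a two-action stand-in task, with all names and numbers invented for illustration.

```python
import numpy as np

# Toy stand-in for a manipulation task: two actions, one clearly better.
# Real tasks map rich sensor readings to joint torques; here the policy is
# just a softmax over two logits. Everything below is illustrative only.
rng = np.random.default_rng(0)

theta = np.zeros(2)       # logits of a softmax policy over 2 actions
rewards = [0.1, 1.0]      # expected reward per action (action 1 is better)
alpha = 0.1               # learning rate

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Model-free policy-gradient (REINFORCE) loop: no task model, only
# sampled actions and noisy reward signals.
for _ in range(2000):
    p = softmax(theta)
    a = rng.choice(2, p=p)
    r = rewards[a] + 0.05 * rng.standard_normal()  # noisy reward
    grad = -p
    grad[a] += 1.0        # d log pi(a) / d theta for a softmax policy
    theta += alpha * r * grad

final_p = softmax(theta)
print(final_p)  # probability mass concentrates on the better action
```

The same gradient estimator scales, in principle, from this two-logit policy to the deep networks and high-dimensional action spaces the abstract describes; what changes is the cost of collecting samples, which is why the paper measures wall-clock hours of real-world training.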


Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations

arXiv.org Artificial Intelligence

Dexterous multi-fingered hands are extremely versatile and provide a generic way to perform a multitude of tasks in human-centric environments. However, effectively controlling them remains challenging due to their high dimensionality and large number of potential contacts. Deep reinforcement learning (DRL) provides a model-agnostic approach to control complex dynamical systems, but has not been shown to scale to high-dimensional dexterous manipulation. Furthermore, deployment of DRL on physical systems remains challenging due to sample inefficiency. Consequently, the success of DRL in robotics has thus far been limited to simpler manipulators and tasks. In this work, we show that model-free DRL can effectively scale up to complex manipulation tasks with a high-dimensional 24-DoF hand, and solve them from scratch in simulated experiments. Furthermore, with the use of a small number of human demonstrations, the sample complexity can be significantly reduced, which enables learning with sample sizes equivalent to a few hours of robot experience. The use of demonstrations results in policies that exhibit very natural movements and, surprisingly, are also substantially more robust.
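The demonstration-bootstrapping idea in this abstract can be sketched in miniature: pretrain the policy by behavior cloning on a handful of demonstrated actions, then fine-tune with model-free RL from that warm start. This is a hedged toy illustration on a two-action stand-in task, not the paper's actual method or its 24-DoF hand; the demo data, learning rates, and step counts are all invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Same toy task as before: two actions, action 1 yields higher reward.
rewards = [0.1, 1.0]

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def rl_finetune(theta, steps):
    # Model-free REINFORCE fine-tuning from whatever theta we start with.
    for _ in range(steps):
        p = softmax(theta)
        a = rng.choice(2, p=p)
        r = rewards[a] + 0.05 * rng.standard_normal()
        g = -p
        g[a] += 1.0
        theta = theta + 0.1 * r * g
    return theta

# A handful of "human demonstrations": the demonstrator usually, but not
# always, picks the good action. Behavior cloning maximizes the
# log-likelihood of these demonstrated actions under the policy.
demos = [1, 1, 1, 0, 1]
theta = np.zeros(2)
for _ in range(50):                  # cloning passes over the demos
    for a in demos:
        p = softmax(theta)
        g = -p
        g[a] += 1.0                  # grad of log pi(a) w.r.t. logits
        theta = theta + 0.05 * g

p_after_bc = softmax(theta)          # warm start already favors action 1
theta = rl_finetune(theta, 200)      # short RL run instead of a long one
p_final = softmax(theta)
print(p_after_bc, p_final)
```

The cloning step gives the RL phase a policy that already prefers good actions, so far fewer environment samples are needed; that is the mechanism behind the abstract's claim that demonstrations reduce sample complexity to a few hours of robot experience.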

