Yesterday, artificial intelligence (AI) powerhouse OpenAI astonished the world by unveiling a prototype of a robotic arm that could solve a Rubik's Cube with one hand. The prototype not only represented a milestone for the robotics ecosystem in solving high-complexity tasks that require rich sensory information, but also marked a major achievement for the AI community: the OpenAI robot was trained entirely in simulation, using the same reinforcement learning models that the OpenAI Five system used to beat human players in Dota 2. The research was discussed in a paper that accompanied the news. The importance of OpenAI's achievement was not about designing a robot that could solve a Rubik's Cube.
Anyone who lived through the 1980s knows how maddeningly difficult it is to solve a Rubik's Cube, let alone to accomplish the feat without peeling the stickers off and rearranging them. The six-sided contraption presents a special kind of challenge to modern deep learning techniques, one that makes it harder than, say, learning to play chess or Go. That used to be the case, anyway. Researchers from the University of California, Irvine, have developed a new deep learning technique that can teach itself to solve the Rubik's Cube. What they came up with is very different from an algorithm hand-designed to solve the toy from any position.
A pair of hardware hackers have beaten the world record for solving a Rubik's Cube robotically, completing the task in nearly half the previous time. The Guinness World Record was set just over a year ago by a Hungarian architect and his 'Sub1 Reloaded' machine, which solved a Rubik's Cube in 0.637 seconds. That record, however, has now been demolished. Software developer Jared Di Carlo and MIT Biomimetics Lab Master's student Ben Katz devised a 'Rubik's Contraption' capable of solving the complicated puzzle in a stunning 0.38 seconds. The pair discovered that they could beat the world record by using a different kind of motor on their machine. 'We noticed that all of the fast Rubik's Cube solvers were using stepper motors, and thought that we could do better if we used better motors,' Di Carlo wrote in a blog post.
We've trained a pair of neural networks to solve the Rubik's Cube with a human-like robot hand. The neural networks are trained entirely in simulation, using the same reinforcement learning code as OpenAI Five paired with a new technique called Automatic Domain Randomization (ADR). The system can handle situations it never saw during training, such as being prodded by a stuffed giraffe. This shows that reinforcement learning isn't just a tool for virtual tasks, but can solve physical-world problems requiring unprecedented dexterity. Human hands let us solve a wide variety of tasks.
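The core idea behind domain randomization is to train the policy across many simulated environments whose physical parameters (mass, friction, and so on) are sampled from ranges, and ADR widens those ranges automatically as the policy improves, so the policy generalizes to the messy real world. The following is a minimal sketch of that idea under stated assumptions: the class names, parameter values, and the 90% success threshold are illustrative inventions, not OpenAI's actual implementation.

```python
import random

class ADRParameter:
    """One simulation parameter whose randomization range grows over time.

    Illustrative sketch of automatic domain randomization: the range starts
    collapsed at a nominal value and widens as the policy gets better.
    """

    def __init__(self, nominal, step):
        self.low = nominal    # current lower bound of the sampling range
        self.high = nominal   # current upper bound of the sampling range
        self.step = step      # how much to widen the range per update

    def sample(self):
        # Draw a value for this episode's simulated environment.
        return random.uniform(self.low, self.high)

    def expand(self):
        # Widen the range so future environments are more varied.
        self.low -= self.step
        self.high += self.step

def adr_update(params, success_rate, threshold=0.9):
    """Expand every parameter's range once the policy clears the threshold.

    The real ADR algorithm evaluates performance at range boundaries per
    parameter; this sketch expands all ranges on a single global signal.
    """
    if success_rate >= threshold:
        for p in params.values():
            p.expand()

# Hypothetical physical parameters of the simulated cube and hand.
params = {
    "cube_mass": ADRParameter(nominal=0.09, step=0.005),  # kg
    "friction":  ADRParameter(nominal=1.0,  step=0.05),
}

# Schematic training loop: sample an environment, evaluate, widen ranges.
for episode in range(3):
    env_config = {name: p.sample() for name, p in params.items()}
    success_rate = 0.95  # placeholder for the policy's measured success
    adr_update(params, success_rate)
```

After a few successful updates the friction range has grown from the single point 1.0 to an interval around it, so the policy keeps facing slightly harder, more varied environments as it improves.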
In recent years, a growing number of researchers have explored the use of robotic arms or dexterous hands to solve a variety of everyday tasks. While many of them have successfully tackled simple tasks, such as grasping or basic manipulation, complex tasks that involve multiple steps and precise, strategic movements have so far proved harder to address. A team of researchers at the Chinese University of Hong Kong and Tencent AI Lab has recently developed a deep learning-based approach to solving a Rubik's Cube with a multi-fingered dexterous hand. Their approach, presented in a paper pre-published on arXiv, allows a dexterous hand to tackle more advanced in-hand manipulation tasks, such as the renowned Rubik's Cube puzzle. A Rubik's Cube is a plastic cube covered in multi-colored squares that can be shifted into different positions.