Anyone who lived through the 1980s knows how maddeningly difficult it is to solve a Rubik's Cube, at least without peeling the stickers off and rearranging them. The six-sided contraption also presents a special kind of challenge to modern deep learning techniques, one that makes it harder than, say, learning to play chess or Go. That used to be the case, anyway. Researchers from the University of California, Irvine, have developed a new deep learning technique that can teach itself to solve the Rubik's Cube. What they came up with is very different from an algorithm hand-designed to solve the toy from any position.
A deep-learning algorithm has been developed that can solve the Rubik's Cube faster than any human. It completes the puzzle with a 100 per cent success rate, typically in around 20 moves. Humans can beat the AI's time of 18 seconds (the world record is around four seconds), but human solutions are far less efficient, often requiring around 50 moves. The system was created at the University of California, Irvine, and a demo is available online. Given an unsolved cube, the machine must decide whether a specific move is an improvement on the existing configuration.
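The idea of deciding whether a move improves the configuration can be sketched as greedy one-step lookahead with a learned cost-to-go function: the solver evaluates each legal move's successor and picks the one the function rates closest to solved. The toy integer "states" and hand-written value function below are illustrative stand-ins, not the system's actual neural network:

```python
# Hedged sketch of value-based move selection. States are toy integers
# measuring distance from the solved state (0); estimated_cost_to_go is
# a stand-in for a trained network that predicts moves-to-solve.

def estimated_cost_to_go(state):
    # Stand-in for a learned value function.
    return abs(state)

def legal_moves(state):
    # Toy move set: each "move" shifts the state by +/- 1.
    return [state - 1, state + 1]

def best_move(state):
    # Pick the successor the value function rates closest to solved.
    return min(legal_moves(state), key=estimated_cost_to_go)

def solve(state, max_steps=100):
    path = [state]
    for _ in range(max_steps):
        if state == 0:  # solved
            break
        state = best_move(state)
        path.append(state)
    return path

# solve(3) walks the states 3 -> 2 -> 1 -> 0.
```

In the real system the hard part is training the value function itself; the greedy loop above only shows how such a function, once learned, turns into a move-by-move solver.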
Researchers have developed an AI algorithm that can solve a Rubik's Cube in a fraction of a second, according to a study published in the journal Nature Machine Intelligence. The system, known as DeepCubeA, uses a form of machine learning that teaches itself how to play, cracking the puzzle without being specifically coached by humans. "Artificial intelligence can defeat the world's best human chess and Go players, but some of the more difficult puzzles, such as the Rubik's Cube, had not been solved by computers, so we thought they were open for AI approaches," Pierre Baldi, one of the developers of the algorithm and a computer scientist at the University of California, Irvine, said in a statement. According to Baldi, the latest development could herald a new generation of deep-learning systems more advanced than those used in commercially available applications such as Siri and Alexa. "These systems are not really intelligent; they're brittle, and you can easily break or fool them," Baldi said.
Microsoft researchers have created an artificial intelligence-based system that learned how to get the maximum score on the addictive 1980s video game Ms. Pac-Man, using a divide-and-conquer method that could have broad implications for teaching AI agents to do complex tasks that augment human capabilities. The team from Maluuba, a Canadian deep learning startup acquired by Microsoft earlier this year, used a branch of AI called reinforcement learning to play the Atari 2600 version of Ms. Pac-Man perfectly. Using that method, the team achieved the maximum possible score of 999,990. Doina Precup, an associate professor of computer science at McGill University in Montreal, said that is a significant achievement among AI researchers, who have been using various video games to test their systems but have found Ms. Pac-Man among the most difficult to crack. But Precup said she was impressed not just with what the researchers achieved but with how they achieved it.
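The divide-and-conquer idea can be sketched as follows: many small agents each score the possible moves for their own narrow objective (eat a particular pellet, avoid a particular ghost), and an aggregator sums those scores and takes the best overall move. The hand-written scores below are illustrative stand-ins for the learned per-agent values in Maluuba's method, not its actual implementation:

```python
# Hedged sketch of divide-and-conquer move selection: each agent
# contributes a score per action, and the aggregator sums them and
# picks the action with the highest total.

ACTIONS = ["up", "down", "left", "right"]

def aggregate_action(per_agent_scores):
    # per_agent_scores: list of {action: score} dicts, one per agent.
    totals = {a: sum(scores[a] for scores in per_agent_scores)
              for a in ACTIONS}
    return max(totals, key=totals.get)

# Example: one agent chasing a pellet to the left, one fleeing a
# ghost approaching from above (scores are made up for illustration).
pellet_agent = {"up": 0.1, "down": 0.1, "left": 0.9, "right": 0.2}
ghost_agent = {"up": -1.0, "down": 0.5, "left": 0.3, "right": 0.3}

# aggregate_action([pellet_agent, ghost_agent]) chooses "left": the
# pellet agent's strong preference outweighs the ghost agent's mild one.
```

The appeal of this decomposition is that each narrow objective is far easier to learn than the full game, while the summed vote still produces sensible whole-game behavior.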
In this paper we present our approach to improving the traditional alpha-beta search process for strategic board games by modifying the method in two ways: 1) forgoing the evaluation of leaf nodes that are not terminal states and 2) employing a utility table that stores the utility for subsets of board configurations. We concentrate our efforts on the game of Connect Four. Our results show a significant speedup, as well as a framework that relaxes common agent assumptions in game search. In addition, it allows game designers to easily modify the agent's strategy by changing the goal from dominance to interaction.
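The base mechanism the abstract builds on, alpha-beta pruning combined with a utility table that caches previously computed values, can be sketched on a much smaller game than Connect Four. The Nim variant below (take one or two stones; taking the last stone wins) is purely illustrative, and its table keys whole states rather than the paper's subsets of board configurations:

```python
# Hedged sketch: alpha-beta search with a utility table. Utilities are
# +1 if the maximizing player wins, -1 if the minimizing player wins.

def alphabeta(stones, maximizing, alpha=-1, beta=1, table=None):
    if table is None:
        table = {}
    key = (stones, maximizing)
    if key in table:                 # utility table hit: reuse the value
        return table[key]
    if stones == 0:
        # The previous mover took the last stone and won.
        return -1 if maximizing else 1
    best = -1 if maximizing else 1
    for take in (1, 2):
        if take > stones:
            break
        val = alphabeta(stones - take, not maximizing, alpha, beta, table)
        if maximizing:
            best = max(best, val)
            alpha = max(alpha, best)
        else:
            best = min(best, val)
            beta = min(beta, best)
        if alpha >= beta:            # alpha-beta cutoff: prune the rest
            break
    table[key] = best
    return best

# Positions with a multiple of 3 stones are losses for the player to
# move; all others are wins.
```

In this toy the table is a straightforward state cache; the paper's contribution is storing utilities for *subsets* of configurations and skipping the evaluation of non-terminal leaves, which this sketch does not attempt to reproduce.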