Divide and conquer: How Microsoft researchers used AI to master Ms. Pac-Man - Next at Microsoft

#artificialintelligence

Microsoft researchers have created an artificial intelligence-based system that learned how to get the maximum score on the addictive 1980s video game Ms. Pac-Man, using a divide-and-conquer method that could have broad implications for teaching AI agents to do complex tasks that augment human capabilities. The team from Maluuba, a Canadian deep learning startup acquired by Microsoft earlier this year, used a branch of AI called reinforcement learning to play the Atari 2600 version of Ms. Pac-Man perfectly. Using that method, the team achieved the maximum score possible of 999,990. Doina Precup, an associate professor of computer science at McGill University in Montreal, said that's a significant achievement among AI researchers, who have been using various video games to test their systems but have found Ms. Pac-Man among the most difficult to crack. But Precup said she was impressed not just with what the researchers achieved but with how they achieved it.
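The divide-and-conquer idea described above can be sketched as reward decomposition: the game's score is split into many simple sub-rewards, each handled by its own small learner, and a top-level aggregator combines their action values to pick each move. The sketch below is illustrative only; the class names, the tabular learners, and the toy state representation are assumptions for this example, not Maluuba's actual implementation.

```python
# Minimal sketch of a divide-and-conquer reinforcement learner: many tiny
# sub-agents, each trained on its own sub-reward (e.g., one pellet or one
# ghost), with action selection done by summing their Q-values.

import random

ACTIONS = ["up", "down", "left", "right"]

class SubAgent:
    """Learns Q-values for a single sub-reward via tabular Q-learning."""
    def __init__(self, alpha=0.1, gamma=0.95):
        self.q = {}                      # (state, action) -> estimated value
        self.alpha, self.gamma = alpha, gamma

    def value(self, state, action):
        return self.q.get((state, action), 0.0)

    def update(self, state, action, sub_reward, next_state):
        # Standard Q-learning update against this agent's own sub-reward.
        best_next = max(self.value(next_state, a) for a in ACTIONS)
        target = sub_reward + self.gamma * best_next
        old = self.value(state, action)
        self.q[(state, action)] = old + self.alpha * (target - old)

def aggregate_action(agents, state, epsilon=0.1):
    """Pick the action whose summed Q-value across all sub-agents is highest."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: sum(ag.value(state, a) for ag in agents))
```

The design point this illustrates is that each sub-problem (eat one pellet, avoid one ghost) is far easier to learn than the full game, and the aggregator only has to combine already-learned value estimates.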


AI teaches itself to complete the Rubik's cube in just 20 MOVES

Daily Mail - Science & tech

A deep-learning algorithm has been developed which can solve the Rubik's cube faster than any human can. It never fails to complete the puzzle, achieving a 100 per cent success rate and managing it in around 20 moves. Humans can beat the AI's solving time of 18 seconds (the world record is around four seconds), but human solutions are far less efficient, often requiring around 50 moves. It was created by researchers at the University of California, Irvine, and can be tried out online. Given an unsolved cube, the machine must decide whether a specific move is an improvement on the existing configuration.


Researchers' deep learning algorithm solves Rubik's Cube faster than any human

#artificialintelligence

Since its invention by a Hungarian architect in 1974, the Rubik's Cube has furrowed the brows of many who have tried to solve it, but the 3-D logic puzzle is no match for an artificial intelligence system created by researchers at the University of California, Irvine. DeepCubeA, a deep reinforcement learning algorithm programmed by UCI computer scientists and mathematicians, can find the solution in a fraction of a second, without any specific domain knowledge or in-game coaching from humans. This is no simple task considering that the cube has completion paths numbering in the billions but only one goal state (each of six sides displaying a solid color), which apparently can't be found through random moves. For a study published today in Nature Machine Intelligence, the researchers demonstrated that DeepCubeA solved 100 percent of all test configurations, finding the shortest path to the goal state about 60 percent of the time. The algorithm also works on other combinatorial games such as the sliding tile puzzle, Lights Out and Sokoban.
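The search side of a DeepCubeA-style solver can be sketched with weighted A*, where a learned cost-to-go estimate guides the search toward the single goal state. In this sketch, a hand-coded misplaced-tile count stands in for DeepCubeA's neural network, and the puzzle is the 3x3 sliding-tile puzzle the article also mentions; everything here is an illustrative assumption, not the published system's code.

```python
# Sketch of heuristic-guided search in the style of DeepCubeA, applied to
# the 8-puzzle: weighted A* with f = g + weight * h, where h would be a
# learned cost-to-go network in the real system.

import heapq

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # 0 is the blank tile

def neighbors(state):
    """Yield states reachable by sliding one tile into the blank."""
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def cost_to_go(state):
    """Stand-in for the learned value function: count of misplaced tiles."""
    return sum(1 for a, b in zip(state, GOAL) if a != b and a != 0)

def solve(start, weight=1.0):
    """Weighted A* search; returns the number of moves to reach the goal."""
    frontier = [(weight * cost_to_go(start), 0, start)]
    seen = {start: 0}
    while frontier:
        _, g, state = heapq.heappop(frontier)
        if state == GOAL:
            return g
        for nxt in neighbors(state):
            if nxt not in seen or g + 1 < seen[nxt]:
                seen[nxt] = g + 1
                heapq.heappush(
                    frontier, (g + 1 + weight * cost_to_go(nxt), g + 1, nxt))
    return None
```

With `weight=1.0` and an admissible heuristic this returns shortest paths, matching the article's point that DeepCubeA finds the shortest path in a majority of cases; larger weights trade optimality for speed.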


A method to introduce emotion recognition in gaming

#artificialintelligence

Virtual Reality (VR) is opening up exciting new frontiers in the development of video games, paving the way for increasingly realistic, interactive and immersive gaming experiences. VR consoles, in fact, allow gamers to feel like they are almost inside the game, overcoming limitations associated with display resolution and latency issues. An interesting further integration for VR would be emotion recognition, as this could enable the development of games that respond to a user's emotions in real time. With this in mind, a team of researchers at Yonsei University and Motion Device Inc. have recently proposed a deep-learning-based technique that could enable emotion recognition during VR gaming experiences. Their paper was presented at the 2019 IEEE Conference on Virtual Reality and 3D User Interfaces.


UC Irvine Deep Learning Machine Teaches Itself To Solve A Rubik's Cube

#artificialintelligence

Anyone who has lived through the 1980s knows how maddeningly difficult it is to solve a Rubik's Cube, let alone accomplish the feat without peeling off the stickers and rearranging them. Apparently the six-sided contraption presents a special kind of challenge to modern deep learning techniques that makes it more difficult than, say, learning to play chess or Go. That used to be the case, anyway. Researchers from the University of California, Irvine, have developed a new deep learning technique that can teach itself to solve the Rubik's Cube. What they came up with is very different from an algorithm designed to solve the toy from any position.