A Generalized Multidimensional Evaluation Framework for Player Goal Recognition

AAAI Conferences

Recent years have seen a growing interest in player modeling, which supports the creation of player-adaptive digital games. A central problem of player modeling is goal recognition, which aims to recognize players’ intentions from observable gameplay behaviors. Player goal recognition offers the promise of enabling games to dynamically adjust challenge levels, perform procedural content generation, and create believable NPC interactions. A growing body of work is investigating a wide range of machine learning-based goal recognition models. In this paper, we introduce GOALIE, a multidimensional framework for evaluating player goal recognition models. The framework integrates multiple metrics for player goal recognition models, including two novel metrics: n-early convergence rate and standardized convergence point. We demonstrate the application of the GOALIE framework with the evaluation of several player goal recognition models, including Markov logic network-based, deep feedforward neural network-based, and long short-term memory network-based goal recognizers on two different educational games. The results suggest that GOALIE effectively captures goal recognition behaviors that are key to next-generation player modeling.
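As a concrete illustration of the two novel metrics, the sketch below computes them for sequences of per-observation goal predictions. It assumes convergence-based definitions commonly used in goal recognition (the convergence point is the earliest step after which predictions stay fixed on the true goal, standardizing divides by sequence length, and the n-early convergence rate is the fraction of sequences converging at least n observations before the end); the function names and exact definitions are illustrative, not taken from the paper.

    from typing import Optional, Sequence

    def convergence_point(preds: Sequence[str], true_goal: str) -> Optional[int]:
        """Earliest 1-based step after which every prediction equals the true goal.
        Returns None if the predictions never settle on the true goal."""
        cp = None
        for step, pred in enumerate(preds, start=1):
            if pred == true_goal:
                if cp is None:
                    cp = step
            else:
                cp = None  # a later wrong prediction resets convergence
        return cp

    def standardized_convergence_point(preds: Sequence[str], true_goal: str) -> Optional[float]:
        """Convergence point divided by sequence length (lower means earlier convergence)."""
        cp = convergence_point(preds, true_goal)
        return None if cp is None else cp / len(preds)

    def n_early_convergence_rate(sequences, true_goals, n: int) -> float:
        """Fraction of sequences that converge at least n observations before the end."""
        hits = sum(
            1
            for preds, goal in zip(sequences, true_goals)
            if (cp := convergence_point(preds, goal)) is not None and len(preds) - cp >= n
        )
        return hits / len(sequences)

    # Example: predictions settle on the true goal "A" at step 2 of 5, so the
    # standardized convergence point is 0.4 and the sequence counts as 3-early.
    print(standardized_convergence_point(["B", "A", "A", "A", "A"], "A"))  # 0.4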


Divide and conquer: How Microsoft researchers used AI to master Ms. Pac-Man - Next at Microsoft

#artificialintelligence

Microsoft researchers have created an artificial intelligence-based system that learned how to get the maximum score on the addictive 1980s video game Ms. Pac-Man, using a divide-and-conquer method that could have broad implications for teaching AI agents to do complex tasks that augment human capabilities. The team from Maluuba, a Canadian deep learning startup acquired by Microsoft earlier this year, used a branch of AI called reinforcement learning to play the Atari 2600 version of Ms. Pac-Man perfectly. Using that method, the team achieved the maximum possible score of 999,990. Doina Precup, an associate professor of computer science at McGill University in Montreal, said that's a significant achievement among AI researchers, who have been using various video games to test their systems but have found Ms. Pac-Man among the most difficult to crack. But Precup said she was impressed not just with what the researchers achieved but with how they achieved it.
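The blurb does not spell out the divide-and-conquer method, but one common way to realize it in reinforcement learning is reward decomposition: many simple sub-agents each value the game from the perspective of a single objective (a pellet, a ghost), and their action values are combined before acting. The sketch below is only a hedged illustration of that idea; the array shapes, weights, and function names are assumptions, not details from the article.

    from typing import Optional
    import numpy as np

    def aggregate_q(per_component_q: np.ndarray,
                    weights: Optional[np.ndarray] = None) -> np.ndarray:
        """Weighted sum of per-component action values.
        per_component_q has shape (n_components, n_actions): one row per
        sub-agent, e.g. one per pellet or ghost in the decomposition."""
        if weights is None:
            weights = np.ones(per_component_q.shape[0])
        return weights @ per_component_q  # shape (n_actions,)

    def select_action(per_component_q: np.ndarray) -> int:
        """Act greedily with respect to the aggregated action values."""
        return int(np.argmax(aggregate_q(per_component_q)))

    # Example: three sub-agents scoring four candidate moves; a strongly negative
    # ghost-avoidance row can override the pellet-seeking rows.
    q = np.array([[1.0, 0.2, 0.0, 0.3],
                  [0.1, 0.9, 0.0, 0.3],
                  [0.0, -2.0, 0.5, 0.1]])
    print(select_action(q))  # 0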


Modeling Player Engagement with Bayesian Hierarchical Models

AAAI Conferences

Modeling player engagement is a key challenge in games. However, the gameplay signatures of engaged players can be highly context-sensitive, varying based on where the game is used or what population of players is using it. Traditionally, models of player engagement are investigated in a particular context, and it is unclear how effectively these models generalize to other settings and populations. In this work, we investigate a Bayesian hierarchical linear model for multi-task learning to devise a model of player engagement from a pair of datasets that were gathered in two complementary contexts: a Classroom Study with middle school students and a Laboratory Study with undergraduate students. Both groups of players used similar versions of Crystal Island, an educational interactive narrative game for science learning. Results indicate that the Bayesian hierarchical model outperforms both pooled and context-specific models in cross-validation measures of predicting player motivation from in-game behaviors, particularly for the smaller Classroom Study group. Further, we find that the posterior distributions of model parameters indicate that the coefficient for a measure of gameplay performance significantly differs between groups. Drawing upon their capacity to share information across groups, hierarchical Bayesian methods provide an effective approach for modeling player engagement with data from similar, but different, contexts.
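For readers who want to see what a partially pooled model of this kind looks like in code, the sketch below sets up a Bayesian hierarchical linear regression with context-specific coefficients drawn from shared population-level priors. PyMC is an assumed implementation choice, and the synthetic data, variable names (X for in-game behavior features, y for the motivation measure, group_idx for Classroom vs. Laboratory), and priors are illustrative rather than taken from the paper.

    import numpy as np
    import pymc as pm

    rng = np.random.default_rng(0)
    n_obs, n_features, n_groups = 120, 3, 2
    X = rng.normal(size=(n_obs, n_features))           # in-game behavior features
    group_idx = rng.integers(0, n_groups, size=n_obs)  # 0 = Classroom, 1 = Laboratory
    true_beta = np.array([[0.5, -0.2, 0.1],            # synthetic ground truth in which the
                          [0.9, -0.2, 0.1]])           # first coefficient differs by group
    y = (X * true_beta[group_idx]).sum(axis=1) + rng.normal(scale=0.3, size=n_obs)

    with pm.Model() as hierarchical_model:
        # Population-level priors shared across contexts (the partial-pooling step)
        mu_beta = pm.Normal("mu_beta", mu=0.0, sigma=1.0, shape=n_features)
        sigma_beta = pm.HalfNormal("sigma_beta", sigma=1.0, shape=n_features)
        # Context-specific coefficients shrunk toward the shared mean
        beta = pm.Normal("beta", mu=mu_beta, sigma=sigma_beta, shape=(n_groups, n_features))
        sigma_y = pm.HalfNormal("sigma_y", sigma=1.0)
        mu_y = (beta[group_idx] * X).sum(axis=-1)
        pm.Normal("y_obs", mu=mu_y, sigma=sigma_y, observed=y)
        idata = pm.sample(1000, tune=1000, chains=2, random_seed=0)

    # Comparing the posteriors of beta[0, j] and beta[1, j] for a given feature j
    # is the kind of between-group coefficient check the abstract describes.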


AI teaches itself to complete the Rubik's cube in just 20 MOVES

Daily Mail - Science & tech

A deep-learning algorithm has been developed that can solve the Rubik's cube faster than any human. It never fails to complete the puzzle, achieving a 100 per cent success rate and solving it in around 20 moves. Humans can beat the AI's mark of 18 seconds (the world record is around four seconds), but their solutions are far less efficient, often requiring around 50 moves. The system was created by researchers at the University of California, Irvine, and can be tried out online. Given an unsolved cube, the machine must decide whether a specific move is an improvement on the existing configuration.
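The last sentence describes move selection as judging whether each move improves the current configuration. A minimal sketch of that idea, assuming a learned estimator of how far a state is from solved, is below; cost_to_go, apply_move, and the move list are hypothetical placeholders, and a full solver would typically combine such an estimator with search rather than a single greedy step.

    from typing import Callable, Sequence, TypeVar

    State = TypeVar("State")

    def best_move(state: State,
                  moves: Sequence[str],
                  apply_move: Callable[[State, str], State],
                  cost_to_go: Callable[[State], float]) -> str:
        """Return the move whose successor state the learned estimator
        rates as closest to solved (lowest estimated cost-to-go)."""
        return min(moves, key=lambda m: cost_to_go(apply_move(state, m)))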


Rubik's cube solved in "fraction of a second" by artificial intelligence machine learning algorithm

#artificialintelligence

Researchers have developed an AI algorithm that can solve a Rubik's cube in a fraction of a second, according to a study published in the journal Nature Machine Intelligence. The system, known as DeepCubeA, uses a form of machine learning that teaches itself how to play in order to crack the puzzle without being specifically coached by humans. "Artificial intelligence can defeat the world's best human chess and Go players, but some of the more difficult puzzles, such as the Rubik's Cube, had not been solved by computers, so we thought they were open for AI approaches," Pierre Baldi, one of the developers of the algorithm and a computer scientist at the University of California, Irvine, said in a statement. According to Baldi, the latest development could herald a new generation of artificial intelligence (AI) deep-learning systems that are more advanced than those used in commercially available applications such as Siri and Alexa. "These systems are not really intelligent; they're brittle, and you can easily break or fool them," Baldi said.
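One way to make "teaches itself without being specifically coached" concrete for a puzzle with a known solved state is to generate training positions by scrambling backwards from that state, so the scramble depth provides a rough self-generated learning signal. The sketch below shows only that data-generation step, with hypothetical placeholders (solved_state, apply_move, moves); it is an assumed reading of the description above, not the published training procedure.

    import random
    from typing import Callable, Iterator, Sequence, Tuple, TypeVar

    State = TypeVar("State")

    def scramble_samples(solved_state: State,
                         moves: Sequence[str],
                         apply_move: Callable[[State, str], State],
                         max_depth: int,
                         n: int) -> Iterator[Tuple[State, int]]:
        """Yield (state, scramble_depth) pairs made by scrambling the solved cube;
        the scramble depth serves as a rough self-supervised training target."""
        for _ in range(n):
            depth = random.randint(1, max_depth)
            state = solved_state
            for _ in range(depth):
                state = apply_move(state, random.choice(moves))
            yield state, depth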