In this paper we propose a new algorithm for solving general two-player turn-taking games that performs symbolic search utilizing binary decision diagrams (BDDs). It consists of two stages: first, it determines all breadth-first search (BFS) layers using forward search, omitting duplicate detection; next, the solving process operates in the backward direction only within these BFS layers, thereby partitioning all BDDs according to the layers in which the states reside. We provide experimental results for selected games and compare them to a previous approach. This comparison shows that in most cases the new algorithm outperforms the existing one in terms of runtime and memory, so that it can solve games that could not be solved before with a general approach.
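The two-stage scheme can be sketched as follows. This is an illustrative sketch only, with explicit Python sets standing in for the symbolic BDD representation; the actual algorithm manipulates whole state sets symbolically via BDD image and preimage operations, and the game interface (`successors`, terminal evaluation) is hypothetical.

```python
# Sketch of layered two-stage game solving (sets stand in for BDDs).
# Stage 1: forward BFS records the set of states reachable at each depth.
# Stage 2: retrograde analysis classifies states as wins or losses for the
# player to move, working backward, restricted to the recorded layers.

def forward_layers(initial, successors):
    """Stage 1: BFS from the initial state; one state set per depth."""
    layers = [frozenset([initial])]
    while True:
        nxt = {t for s in layers[-1] for t in successors(s)}
        if not nxt:
            break
        layers.append(frozenset(nxt))
    return layers

def solve_layers(layers, successors, terminal_win_for_mover):
    """Stage 2: classify states layer by layer, deepest layer first."""
    value = {}  # state -> True iff the player to move wins
    for layer in reversed(layers):
        for s in layer:
            succ = list(successors(s))
            if not succ:  # terminal state: evaluate directly
                value[s] = terminal_win_for_mover(s)
            else:         # win iff some move leads to a loss for the opponent
                value[s] = any(not value[t] for t in succ)
    return value
```

Because every successor of a state in layer *i* lies in layer *i + 1*, the backward pass never needs to look outside the precomputed layers, which is what keeps the per-layer BDDs small in the symbolic setting.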
AI Magazine is an official publication of the Association for the Advancement of Artificial Intelligence (AAAI). It is published four times each year in fall, winter, spring, and summer issues, and is sent to all members of the Association and subscribed to by most research libraries. Back issues are available online (issues less than 18 months old are available only to AAAI members). The purpose of AI Magazine is to disseminate timely and informative expository articles that represent the current state of the art in AI and to keep its readers posted on AAAI-related matters. The articles are selected for appeal to readers engaged in research and applications across the broad spectrum of AI.
Kantharaju, Pavan (Drexel University) | Alderfer, Katelyn (Drexel University) | Zhu, Jichen (Drexel University) | Char, Bruce (Drexel University) | Smith, Brian (Drexel University) | Ontanon, Santiago (Drexel University)
This paper focuses on tracing player knowledge in educational games. Specifically, given a set of concepts or skills required to master a game, the goal is to estimate the likelihood with which the current player has mastery of each of those concepts or skills. The main contribution of the paper is an approach that integrates machine learning and domain-knowledge rules to determine when the player applied a certain skill and either succeeded or failed. This is then given as input to a standard knowledge tracing module (such as those from Intelligent Tutoring Systems) to perform knowledge tracing. We evaluate our approach in the context of Parallel, an educational game for teaching parallel and concurrent programming, using data collected from real users, and show that our approach can predict students' skills with a low mean squared error.
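A standard knowledge tracing module of the kind the paper feeds its detected skill applications into is Bayesian Knowledge Tracing (BKT). The sketch below shows the usual BKT posterior update; the parameter names (`p_slip`, `p_guess`, `p_learn`) are the conventional ones, and the numeric values are illustrative, not taken from the paper.

```python
# Minimal Bayesian Knowledge Tracing (BKT) update: after observing one
# success/failure of a skill, revise P(skill mastered) via Bayes' rule,
# then account for the chance of learning during this opportunity.

def bkt_update(p_mastery, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """Return the updated probability that the skill is mastered."""
    if correct:
        evidence = p_mastery * (1 - p_slip)
        p_cond = evidence / (evidence + (1 - p_mastery) * p_guess)
    else:
        evidence = p_mastery * p_slip
        p_cond = evidence / (evidence + (1 - p_mastery) * (1 - p_guess))
    return p_cond + (1 - p_cond) * p_learn

# Fold a detected sequence of skill successes/failures into the estimate.
p = 0.3  # illustrative prior mastery
for outcome in [True, True, False, True]:
    p = bkt_update(p, outcome)
```

The role of the paper's machine-learning component is precisely to produce the `correct`/incorrect observations that drive updates like this one.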
The process of play testing a game is subjective, expensive, and incomplete. In this paper, we present a play-testing approach that explores the game space with automated agents and collects data to answer questions posed by the designers. Rather than having agents interact with an actual game client, this approach recreates the bare-bones mechanics of the game as a separate system. Our agent is able to play in minutes what would take human testers days of organic gameplay. The analysis of thousands of game simulations exposed imbalances in game actions, identified inconsequential rewards, and evaluated the effectiveness of optional strategic choices. Our test-case game, The Sims Mobile, was recently released, and the findings shown here influenced design changes that resulted in an improved player experience.
In this paper we present an approach to using sequence analysis to model player behavior. This approach is designed to work in game development contexts, integrating production teams and delivering profiles that inform game design. We demonstrate the method via a case study of the game Tom Clancy’s The Division, which, with its 20 million players, represents a major current commercial title. The approach presented provides a mixed-methods framework, combining qualitative knowledge elicitation and workshops with large-scale telemetry analysis, using sequence mining and clustering to develop detailed player profiles showing the core gameplay loops of The Division’s players.
In order to create well-crafted learning progressions, designers guide players by gradually introducing game skills and giving ample time for the player to master those skills. However, analyzing the quality of learning progressions is challenging, especially during the design phase, as content is ever-changing. This research presents the application of Stratabots — automated player simulations based on models of players with varying sets of skills — to the human computation game Foldit. Stratabot performance analysis coupled with player data reveals a relatively smooth learning progression within tutorial levels, yet still shows evidence for improvement. Leveraging existing general game-playing algorithms such as Monte Carlo Evaluation can reduce the development time of this approach to automated playtesting without losing the predictive power of the player model.
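The Monte Carlo Evaluation idea referred to above can be sketched briefly: a move is scored by averaging the outcomes of random playouts that start from it. The `state` interface used here (`legal_moves`, `apply`, `is_over`, `score`) is a hypothetical stand-in, not Foldit's actual API.

```python
# Sketch of Monte Carlo Evaluation: estimate a move's value by averaging
# the final scores of random rollouts, then pick the best-scoring move.
import random

def monte_carlo_evaluate(state, move, rollouts=100):
    """Average final score over random playouts starting after `move`."""
    total = 0.0
    for _ in range(rollouts):
        s = state.apply(move)
        while not s.is_over():
            s = s.apply(random.choice(s.legal_moves()))
        total += s.score()
    return total / rollouts

def best_move(state, rollouts=100):
    """Greedy move selection under the Monte Carlo estimates."""
    return max(state.legal_moves(),
               key=lambda m: monte_carlo_evaluate(state, m, rollouts))
```

Because the evaluator needs only a forward model of the game, a Stratabot restricted to a given skill set can reuse it unchanged, which is what makes the approach cheap to retarget as the content changes.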
Beaupre, Spencer (Worcester Polytechnic Institute) | Wiles, Thomas (Worcester Polytechnic Institute) | Briggs, Sean (Worcester Polytechnic Institute) | Smith, Gillian (Worcester Polytechnic Institute)
Existing approaches to multi-game level generation rely on level structure emerging organically via level fitness. In this paper, we present a method for generating levels for games in the GVGAI framework using a design-pattern-based approach, where design patterns are derived from an analysis of the existing corpus of GVGAI game levels. We created two new generators, one constructive and one search-based, and compared them to an existing search-based generator. Results show that our generators are comparable to, and even preferred over, the prior generator, especially among players with existing game experience. Our search-based generator also outperforms our constructive generator in terms of player preference.
Aytemiz, Batu (University of California, Santa Cruz) | Karth, Isaac (University of California, Santa Cruz) | Harder, Jesse (University of California, Santa Cruz) | Smith, Adam M. (University of California, Santa Cruz) | Whitehead, Jim (University of California, Santa Cruz)
Most tutorials in video games do not consider the skill level of the player when deciding what information to present. This makes many tutorials either tedious for experienced players or not informative enough for players who are new to the given genre. With Talin, implemented as an asset in the Unity game engine, we make it possible to create a mastery model of an individual player's skill levels by operationalizing Dan Cook's skill atom theory. We propose that using this mastery model opens up a new design space for tutorials. We show an example tutorial implementation with Talin assembled using only graphical components provided by our framework, without the need to write any code. The resulting dynamic tutorial gives the player information only when they need it. While the novice player is given all the information they need to learn the system, the expert player is not bogged down by tooltip pop-ups about mechanics they have already mastered.
Procedural Content Generation (PCG) has been a part of video games for the majority of their existence and has been an area of active research over the past decade. However, despite the interest in PCG, there is no commonly accepted methodology for assessing and analyzing a generator. Furthermore, recent machine-learned PCG techniques commonly state the goal of learning the design within the original content, but there has been little assessment of whether these techniques actually achieve this goal. This paper presents a number of techniques for the assessment and analysis of PCG systems, giving practitioners and researchers better insight into the strengths and weaknesses of these systems, allowing for better comparison of systems, and reducing the reliance on ad hoc, cherry-picking-prone techniques.