Guided Game Level Repair via Explainable AI
Bazzaz, Mahsa, Cooper, Seth
Procedurally generated levels created by machine learning models can be unsolvable without further editing. Various methods have been developed to automatically repair these levels by enforcing hard constraints during the post-processing step. However, as levels increase in size, these constraint-based repairs become increasingly slow. This paper proposes using explainability methods to identify specific regions of a level that contribute to its unsolvability. By assigning higher weights to these regions, constraint-based solvers can prioritize these problematic areas, enabling more efficient repairs. Our results, tested across three games, demonstrate that this approach can help to repair procedurally generated levels faster.
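The paper's pipeline is not reproduced here; as a rough, self-contained sketch of the idea, the snippet below uses occlusion-based attribution on a placeholder completability scorer to turn per-tile contributions to unsolvability into weights that a constraint-based repairer could prioritize. The scorer, level encoding, and weighting scheme are all illustrative assumptions.

```python
import numpy as np

def completability_score(level: np.ndarray) -> float:
    """Placeholder for a learned completability classifier (hypothetical).
    Returns a score in [0, 1]; here it simply penalizes solid tiles (1s)
    in the top row so the example runs end to end."""
    return float(1.0 - level[0].mean())

def occlusion_attribution(level: np.ndarray, fill_value: int = 0) -> np.ndarray:
    """Estimate how much each tile contributes to unsolvability by occluding
    it and measuring the change in the classifier's score."""
    base = completability_score(level)
    attrib = np.zeros(level.shape, dtype=float)
    for r in range(level.shape[0]):
        for c in range(level.shape[1]):
            occluded = level.copy()
            occluded[r, c] = fill_value
            # Positive attribution: occluding this tile raises the score.
            attrib[r, c] = max(completability_score(occluded) - base, 0.0)
    return attrib

def repair_weights(attrib: np.ndarray, scale: float = 10.0) -> np.ndarray:
    """Turn attributions into weights a constraint-based repairer could use
    to prioritize editing the most problematic regions first."""
    if attrib.max() > 0:
        attrib = attrib / attrib.max()
    return 1.0 + scale * attrib  # higher weight -> higher repair priority

level = np.random.randint(0, 2, size=(8, 12))
weights = repair_weights(occlusion_attribution(level))
print(weights.round(2))
```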
Controllable Game Level Generation: Assessing the Effect of Negative Examples in GAN Models
Bazzaz, Mahsa, Cooper, Seth
Generative Adversarial Networks (GANs) are unsupervised models designed to learn and replicate a target distribution. The vanilla versions of these models can be extended into more controllable variants. Conditional Generative Adversarial Networks (CGANs) extend vanilla GANs by conditioning both the generator and discriminator on additional information (labels). Controllable models based on complementary learning, such as Rumi-GAN, have also been introduced; Rumi-GANs leverage negative examples to enhance the generator's ability to learn positive examples. We evaluate the performance of two controllable GAN variants, CGAN and Rumi-GAN, in generating game levels that target specific constraints of interest: playability and controllability. This evaluation is conducted under two scenarios: with and without the inclusion of negative examples. The goal is to determine whether incorporating negative examples helps the GAN models avoid generating undesirable outputs. Our findings highlight the strengths and weaknesses of each method in enforcing the generation of specific conditions when generating outputs based on given positive and negative examples.
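As a loose, hypothetical sketch (not the paper's models or the exact Rumi-GAN objective), the PyTorch snippet below shows one way to fold negative examples into training: the discriminator treats playable (positive) levels as real and both unplayable (negative) levels and generated samples as fake, pushing the generator toward the positive distribution. Network sizes, data tensors, and the loss weighting are placeholder assumptions.

```python
import torch
import torch.nn as nn

# Toy generator/discriminator for flattened 14x28 levels (shapes are illustrative).
G = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 14 * 28), nn.Tanh())
D = nn.Sequential(nn.Linear(14 * 28, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def d_step(pos, neg):
    """Discriminator: positives are 'real'; negatives and generated samples are 'fake'."""
    opt_d.zero_grad()
    fake = G(torch.randn(pos.size(0), 32)).detach()
    ones, zeros = torch.ones(pos.size(0), 1), torch.zeros(pos.size(0), 1)
    loss = bce(D(pos), ones) + bce(D(neg), zeros) + bce(D(fake), zeros)
    loss.backward()
    opt_d.step()
    return loss.item()

def g_step(batch_size=16):
    """Generator: push samples toward the positive (desired) distribution."""
    opt_g.zero_grad()
    fake = G(torch.randn(batch_size, 32))
    loss = bce(D(fake), torch.ones(batch_size, 1))
    loss.backward()
    opt_g.step()
    return loss.item()

# One illustrative update with random stand-ins for playable / unplayable levels.
pos_levels = torch.rand(16, 14 * 28) * 2 - 1
neg_levels = torch.rand(16, 14 * 28) * 2 - 1
print(d_step(pos_levels, neg_levels), g_step())
```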
Latent Combinational Game Design
Sarkar, Anurag, Cooper, Seth
We present latent combinational game design -- an approach for generating playable games that blend a given set of games in a desired combination using deep generative latent variable models. We use Gaussian Mixture Variational Autoencoders (GMVAEs), which model the VAE latent space via a mixture of Gaussian components. Through supervised training, each component encodes levels from one game and lets us define blended games as linear combinations of these components. This enables generating new games that blend the input games as well as controlling the relative proportions of each game in the blend. We also extend prior blending work that uses conditional VAEs, compare it against the GMVAE, and additionally introduce a hybrid conditional GMVAE (CGMVAE) architecture that lets us generate whole blended levels and layouts. Results show that these approaches can generate playable games that blend the input games in specified combinations. We use both platformers and dungeon-based games to demonstrate our results.
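A minimal numpy sketch of the blending mechanism described above, assuming a trained GMVAE is available: the per-game Gaussian components and the decoder below are random placeholders, and blended latent vectors are formed as weighted combinations of draws from each game's component before decoding.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the per-game Gaussian components a GMVAE would learn
# (means and diagonal standard deviations in a 16-dimensional latent space).
components = {
    "mario":  (rng.normal(size=16), np.full(16, 0.5)),
    "icarus": (rng.normal(size=16), np.full(16, 0.5)),
}

def decode(z: np.ndarray) -> np.ndarray:
    """Placeholder for the trained decoder; returns a fake 14x16 tile grid."""
    return (np.outer(np.sin(z[:14]), np.cos(z[:16])) > 0).astype(int)

def sample_blend(weights: dict, n: int = 1) -> list:
    """Sample blended latent vectors as a weighted linear combination of
    draws from each game's component, then decode them into levels."""
    levels = []
    for _ in range(n):
        z = np.zeros(16)
        for game, w in weights.items():
            mu, sigma = components[game]
            z += w * rng.normal(mu, sigma)
        levels.append(decode(z))
    return levels

# A 70% Mario / 30% Kid Icarus blend.
blended = sample_blend({"mario": 0.7, "icarus": 0.3}, n=2)
print(blended[0])
```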
Mechanic Maker 2.0: Reinforcement Learning for Evaluating Generated Rules
Gonzalez, Johor Jara, Cooper, Seth, Guzdial, Matthew
Automated game design (AGD), the study of automatically generating game rules, has a long history in technical games research. AGD approaches generally rely on approximations of human play, either objective functions or AI agents. Despite this, the majority of these approximators are static, meaning they do not reflect a human player's ability to learn and improve in a game. In this paper, we investigate the application of Reinforcement Learning (RL) as an approximator for human play for rule generation. We recreate the classic AGD environment Mechanic Maker in Unity as a new, open-source rule generation framework. Our results demonstrate that RL produces distinct sets of rules from an A* agent baseline, which may be more usable by humans.
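The paper's framework is a Unity environment; the toy sketch below only illustrates the general idea of using a learning agent as the play approximator: a tabular Q-learning agent is trained under a "generated" rule set, and its late-episode return serves as a rough learnability score for those rules. The environment, rule encoding, and scoring are hypothetical.

```python
import random

class GridRuleEnv:
    """Toy 1-D grid world whose reward rule is 'generated' (a goal cell)."""
    def __init__(self, rules, size=8):
        self.size, self.goal = size, rules["goal"]
    def reset(self):
        self.pos = 0
        return self.pos
    def step(self, action):          # action: 0 = left, 1 = right
        self.pos = max(0, min(self.size - 1, self.pos + (1 if action else -1)))
        done = self.pos == self.goal
        return self.pos, (1.0 if done else -0.01), done

def q_learning_score(rules, episodes=200, alpha=0.5, gamma=0.95, eps=0.1):
    """Train a Q-learning agent under the rules; return the mean late-episode
    return as a (rough) proxy for how learnable/playable the rules are."""
    env = GridRuleEnv(rules)
    q = [[0.0, 0.0] for _ in range(env.size)]
    returns = []
    for _ in range(episodes):
        s, total = env.reset(), 0.0
        for _ in range(50):
            a = random.randrange(2) if random.random() < eps else int(q[s][1] >= q[s][0])
            s2, r, done = env.step(a)
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s, total = s2, total + r
            if done:
                break
        returns.append(total)
    return sum(returns[-20:]) / 20  # how well the learner does after practice

print(q_learning_score({"goal": 6}))
```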
Active Learning for Classifying 2D Grid-Based Level Completability
Bazzaz, Mahsa, Cooper, Seth
Determining the completability of levels generated by procedural generators such as machine learning models can be challenging, as it can involve the use of solver agents that often require a significant amount of time to analyze and solve levels. Active learning is not yet widely adopted in game evaluations, although it has been used successfully in natural language processing, image and speech recognition, and computer vision, where the availability of labeled data is limited or expensive. In this paper, we propose the use of active learning for learning level completability classification. Through an active learning approach, we train deep-learning models to classify the completability of generated levels for Super Mario Bros., Kid Icarus, and a Zelda-like game. We compare selecting levels to label for completability via active learning against random queries. Our results show that using an active learning approach to label levels results in better classifier performance with the same amount of labeled data.
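A minimal sketch of the uncertainty-based query loop, with placeholder level features, a stand-in solver oracle, and a logistic-regression classifier instead of the deep models used in the paper: each round, the level the current classifier is least certain about is sent to the solver for labeling.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Stand-ins: level feature vectors and a slow 'solver' oracle for completability.
levels = rng.normal(size=(500, 32))
def solver_label(x):                       # placeholder for an expensive agent-based check
    return int(x[:4].sum() > 0)

labeled_idx = list(rng.choice(500, size=20, replace=False))
labels = {i: solver_label(levels[i]) for i in labeled_idx}

for _ in range(10):                        # ten query rounds
    clf = LogisticRegression(max_iter=1000)
    clf.fit(levels[labeled_idx], [labels[i] for i in labeled_idx])
    probs = clf.predict_proba(levels)[:, 1]
    uncertainty = -np.abs(probs - 0.5)     # closest to the decision boundary
    uncertainty[labeled_idx] = -np.inf     # don't re-query labeled levels
    query = int(np.argmax(uncertainty))    # most uncertain level goes to the solver
    labels[query] = solver_label(levels[query])
    labeled_idx.append(query)

print("labeled levels:", len(labeled_idx))
```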
Game Level Blending using a Learned Level Representation
Atmakuri, Venkata Sai Revanth, Cooper, Seth, Guzdial, Matthew
Game level blending via machine learning, the process of combining features of game levels to create unique and novel game levels using Procedural Content Generation via Machine Learning (PCGML) techniques, has gained increasing popularity in recent years. However, many existing techniques rely on human-annotated level representations, which restricts game level blending to a small number of annotated games. Even with annotated games, researchers often need to author an additional shared representation to make blending possible. In this paper, we present a novel approach to game level blending that employs Clustering-based Tile Embeddings (CTE), a learned level representation technique that can serve as a level representation for unannotated games and a unified level representation across games without the need for human annotation. CTE represents game level tiles as a continuous vector representation, unifying their visual, contextual, and behavioral information. We apply this approach to two classic Nintendo games, Lode Runner and The Legend of Zelda. We run an evaluation comparing the CTE representation to a common, human-annotated representation in the blending task and find that CTE has comparable or better performance without the need for human annotation.
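The sketch below is not CTE itself; it only illustrates how a learned, continuous tile representation enables blending without shared annotations. The tile embeddings here are random placeholders: levels are mapped into embedding space, interpolated, and snapped back to the nearest known tile.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-ins for learned tile embeddings (CTE would learn these from data).
tile_embeddings = {t: rng.normal(size=8) for t in ["empty", "solid", "ladder", "enemy"]}
names = list(tile_embeddings)
vectors = np.stack([tile_embeddings[t] for t in names])

def to_embedding_grid(level):
    """Map a grid of tile names to a grid of embedding vectors."""
    return np.stack([[tile_embeddings[t] for t in row] for row in level])

def to_tile_grid(emb_grid):
    """Snap each blended vector back to the nearest known tile embedding."""
    flat = emb_grid.reshape(-1, emb_grid.shape[-1])
    nearest = np.argmin(((flat[:, None, :] - vectors[None]) ** 2).sum(-1), axis=1)
    return np.array(names)[nearest].reshape(emb_grid.shape[:2])

def blend(level_a, level_b, alpha=0.5):
    """Interpolate two same-sized levels in embedding space."""
    mixed = alpha * to_embedding_grid(level_a) + (1 - alpha) * to_embedding_grid(level_b)
    return to_tile_grid(mixed)

a = [["empty", "empty", "solid"], ["ladder", "empty", "solid"]]
b = [["enemy", "solid", "solid"], ["empty", "empty", "empty"]]
print(blend(a, b, alpha=0.6))
```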
Level Assembly as a Markov Decision Process
Biemer, Colan F., Cooper, Seth
Many games feature a progression of levels that does not adapt to the player. This can be problematic: some players may get stuck if the progression is too difficult, while others may find it boring if the progression takes too long to reach more challenging levels. This can be addressed by building levels based on the player's performance and preferences. In this work, we formulate the problem of generating levels for a player as a Markov Decision Process (MDP) and use adaptive dynamic programming (ADP) to solve the MDP before assembling a level. We tested with two case studies and found that using ADP outperforms two baselines. Furthermore, we experimented with player proxies and switched them in the middle of play, and we show that a simple modification prior to running ADP results in quick adaptation. By using ADP, which searches the entire MDP, we produce a dynamic progression of levels that adapts to the player.
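A small value-iteration sketch (one standard form of adaptive dynamic programming) over a toy segment-selection MDP, with a hypothetical player proxy providing rewards; the states, actions, and proxy are illustrative stand-ins for the paper's setup.

```python
# Toy MDP: states are difficulty tiers of level segments; actions pick the next
# segment's tier (the next state equals the chosen tier).
states = ["easy", "medium", "hard"]
actions = states

def player_proxy_reward(state, action):
    """Hypothetical proxy: this player enjoys small difficulty increases."""
    order = {"easy": 0, "medium": 1, "hard": 2}
    step = order[action] - order[state]
    return 1.0 if step == 1 else (0.5 if step == 0 else -0.5)

def value_iteration(gamma=0.9, iters=100):
    v = {s: 0.0 for s in states}
    for _ in range(iters):
        v = {s: max(player_proxy_reward(s, a) + gamma * v[a] for a in actions)
             for s in states}
    policy = {s: max(actions, key=lambda a: player_proxy_reward(s, a) + gamma * v[a])
              for s in states}
    return v, policy

values, policy = value_iteration()
print(policy)   # easy -> medium, medium -> hard, hard -> hard
```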
A Reasoning Engine for the Gamification of Loop-Invariant Discovery
Walter, Andrew, Cooper, Seth, Manolios, Panagiotis
We describe the design and implementation of a reasoning engine that facilitates the gamification of loop-invariant discovery. Our reasoning engine enables students, computational agents, and regular software engineers with no formal-methods expertise to collaboratively prove interesting theorems about simple programs using browser-based, online games. Within an hour, players are able to specify and verify properties of programs that are beyond the capabilities of fully automated tools; that hour includes the time needed to set up the system and to complete a short tutorial explaining game play and reasoning about simple imperative programs. Players are never required to understand formal proofs; they only provide insights by proposing invariants. The reasoning engine is responsible for managing and evaluating the proposed invariants, as well as generating actionable feedback.
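The engine itself is not reproduced here; as a loose illustration of the core task it automates, the sketch below uses the Z3 SMT solver (an assumption, not the paper's tooling) to check whether a player-proposed invariant for a tiny loop is inductive and strong enough to establish a postcondition.

```python
# Program: i = 0; s = 0; while i < n: s += 2; i += 1   (precondition: n >= 0)
# Proposed invariant: s == 2*i and i <= n; desired postcondition: s == 2*n.
from z3 import And, Implies, Ints, IntVal, Not, Solver, unsat

i, s, n = Ints("i s n")
inv = lambda i, s: And(s == 2 * i, i <= n)

def holds(claim):
    """A verification condition holds iff its negation is unsatisfiable."""
    solver = Solver()
    solver.add(Not(claim))
    return solver.check() == unsat

initiation   = Implies(n >= 0, inv(IntVal(0), IntVal(0)))
preservation = Implies(And(inv(i, s), i < n), inv(i + 1, s + 2))
sufficiency  = Implies(And(inv(i, s), Not(i < n)), s == 2 * n)

print("initiation:", holds(initiation))
print("preservation:", holds(preservation))
print("sufficiency:", holds(sufficiency))
```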
Procedural Content Generation using Behavior Trees (PCGBT)
Sarkar, Anurag, Cooper, Seth
Behavior trees (BTs) are a popular method of modeling the behavior of NPCs and enemy AI and have found widespread use in a large number of commercial games. In this paper, rather than use BTs to model game-playing agents, we demonstrate their use for modeling game design agents, defining behaviors as executing content generation tasks rather than in-game actions. Similar to how traditional BTs enable modeling behaviors in a modular and dynamic manner, BTs for PCG enable simple subtrees for generating parts of levels to be combined modularly to form more complex trees for generating whole levels as well as generators that can dynamically vary the generated content. We demonstrate this approach by using BTs to model generators for Super Mario Bros., Mega Man and Metroid levels as well as dungeon layouts and discuss several ways in which this PCGBT paradigm could be applied and extended in the future.
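A bare-bones sketch of the paradigm, with hypothetical generation tasks standing in for the paper's level generators: standard sequence and selector nodes are reused unchanged, but the leaves generate level content instead of performing in-game actions.

```python
import random

class Sequence:
    """Run children in order; fail as soon as one child fails."""
    def __init__(self, *children): self.children = children
    def run(self, level): return all(child.run(level) for child in self.children)

class Selector:
    """Try children in order; succeed on the first child that succeeds."""
    def __init__(self, *children): self.children = children
    def run(self, level): return any(child.run(level) for child in self.children)

class Leaf:
    """A leaf executes a content-generation task rather than an in-game action."""
    def __init__(self, task): self.task = task
    def run(self, level): return self.task(level)

# Hypothetical generation tasks that append chunks to a level under construction.
def flat_ground(level):
    level.append("ground"); return True

def gap(level):
    if random.random() < 0.2:          # pretend the gap sometimes doesn't fit
        return False
    level.append("gap"); return True

def pipe(level):
    level.append("pipe"); return True

# A tiny generator tree: lay ground, then add a gap if possible, otherwise a pipe.
segment_generator = Sequence(Leaf(flat_ground), Selector(Leaf(gap), Leaf(pipe)))

level = []
segment_generator.run(level)
print(level)
```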
Generating and Blending Game Levels via Quality-Diversity in the Latent Space of a Variational Autoencoder
Sarkar, Anurag, Cooper, Seth
Several recent works have demonstrated the use of variational autoencoders (VAEs) both for generating levels in the style of existing games and for blending levels across different games. Additionally, quality-diversity (QD) algorithms have become popular for generating varied game content by using evolution to explore a search space while focusing on both variety and quality. In order to reap the benefits of both approaches, we present a level generation and game blending approach that combines VAEs and QD algorithms. Specifically, we train VAEs on game levels and then run the MAP-Elites QD algorithm using the learned latent space of the VAE as the search space. The latent space captures the properties of the games whose levels we want to generate and blend, while MAP-Elites searches this latent space to find a diverse set of levels optimizing a given objective such as playability. We test our method using models for 5 different platformer games as well as a blended domain spanning 3 of these games. Our results show that using MAP-Elites in conjunction with VAEs enables the generation of a diverse set of playable levels not just for each individual game but also for the blended domain, while illuminating game-specific regions of the blended latent space.
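A compact sketch of the search loop, assuming a trained VAE decoder (replaced here by a placeholder) and simple stand-ins for the playability objective and behavior descriptors: MAP-Elites keeps the fittest latent vector found for each behavior cell and mutates existing elites to fill the archive.

```python
import numpy as np

rng = np.random.default_rng(3)

def decode(z):
    """Placeholder for a trained VAE decoder: latent vector -> 10x10 tile grid."""
    return (np.tanh(np.outer(z[:10], z[10:20])) > 0).astype(int)

def playability(level):          # stand-in objective (solver-based in practice)
    return 1.0 - abs(level.mean() - 0.4)

def behavior(level):             # two behavior descriptors binned into a 10x10 grid
    density = level.mean()
    symmetry = (level == level[:, ::-1]).mean()
    return min(int(density * 10), 9), min(int(symmetry * 10), 9)

archive = {}                     # behavior cell -> (fitness, latent vector)
for _ in range(2000):
    if archive and rng.random() < 0.8:
        parent = archive[list(archive)[rng.integers(len(archive))]][1]
        z = parent + rng.normal(scale=0.2, size=20)      # mutate an existing elite
    else:
        z = rng.normal(size=20)                          # random exploration
    level = decode(z)
    cell, fit = behavior(level), playability(level)
    if cell not in archive or fit > archive[cell][0]:
        archive[cell] = (fit, z)

print("filled cells:", len(archive),
      "best fitness:", max(f for f, _ in archive.values()))
```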