Snodgrass, Sam
Human-like Bots for Tactical Shooters Using Compute-Efficient Sensors
Justesen, Niels, Kaselimi, Maria, Snodgrass, Sam, Vozaru, Miruna, Schlegel, Matthew, Wingren, Jonas, Barros, Gabriella A. B., Mahlmann, Tobias, Sudhakaran, Shyam, Kerr, Wesley, Wang, Albert, Holmgård, Christoffer, Yannakakis, Georgios N., Risi, Sebastian, Togelius, Julian
Artificial intelligence (AI) has enabled agents to master complex video games, from first-person shooters like Counter-Strike to real-time strategy games such as StarCraft II and racing games like Gran Turismo. While these achievements are notable, applying these AI methods in commercial video game production remains challenging due to computational constraints. In commercial scenarios, the majority of computational resources are allocated to 3D rendering, leaving limited capacity for AI methods, which often demand high computational power, particularly those relying on pixel-based sensors. Moreover, the gaming industry prioritizes creating human-like behavior in AI agents to enhance player experience, unlike academic models that focus on maximizing game performance. This paper introduces a novel methodology for training neural networks via imitation learning to play a complex, commercial-standard, VALORANT-like 2v2 tactical shooter game, requiring only modest CPU hardware during inference. Our approach leverages an innovative, pixel-free perception architecture using a small set of ray-cast sensors, which capture essential spatial information efficiently. These sensors allow AI to perform competently without the computational overhead of traditional methods. Models are trained to mimic human behavior using supervised learning on human trajectory data, resulting in realistic and engaging AI agents. Human evaluation tests confirm that our AI agents provide human-like gameplay experiences while operating efficiently under computational constraints. This offers a significant advancement in AI model development for tactical shooter games and possibly other genres.
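The abstract leaves the sensor layout and network details unspecified; the following is a minimal sketch of the general idea, assuming a fan of 2D ray casts that each return a hit distance and a hit category, concatenated into a compact observation vector for a small MLP policy trained by behavior cloning on human (observation, action) pairs. The sensor counts, hit categories, action space, and all names (`raycast_observation`, `PolicyNet`) are illustrative assumptions, not the paper's architecture.

```python
# Illustrative sketch only: a compact ray-cast observation encoder and a small
# behavior-cloning policy. Sensor counts, feature sizes, and the action space
# are assumptions, not the paper's actual architecture.
import numpy as np
import torch
import torch.nn as nn

N_RAYS = 32          # assumed number of ray casts in a horizontal fan
N_HIT_TYPES = 4      # assumed hit categories: wall, enemy, teammate, objective
N_ACTIONS = 12       # assumed discretized action set (move/turn/shoot/etc.)

def raycast_observation(hit_distances, hit_types, max_range=40.0):
    """Build a fixed-size, pixel-free observation from ray-cast results.

    hit_distances: (N_RAYS,) float array of distances to the first hit.
    hit_types:     (N_RAYS,) int array of hit categories in [0, N_HIT_TYPES).
    Returns a (N_RAYS * (1 + N_HIT_TYPES),) float32 vector.
    """
    dists = np.clip(hit_distances / max_range, 0.0, 1.0)          # normalized depth
    onehot = np.eye(N_HIT_TYPES, dtype=np.float32)[hit_types]     # (N_RAYS, N_HIT_TYPES)
    return np.concatenate([dists[:, None], onehot], axis=1).reshape(-1).astype(np.float32)

class PolicyNet(nn.Module):
    """Small MLP policy cheap enough to run on CPU at game tick rates."""
    def __init__(self, obs_dim=N_RAYS * (1 + N_HIT_TYPES), hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, N_ACTIONS),
        )

    def forward(self, obs):
        return self.net(obs)  # action logits

def behavior_cloning_step(policy, optimizer, obs_batch, action_batch):
    """One supervised step on human trajectory data (observation, action) pairs."""
    logits = policy(obs_batch)
    loss = nn.functional.cross_entropy(logits, action_batch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```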
Procedural Content Generation via Knowledge Transformation (PCG-KT)
Sarkar, Anurag, Guzdial, Matthew, Snodgrass, Sam, Summerville, Adam, Machado, Tiago, Smith, Gillian
We introduce the concept of Procedural Content Generation via Knowledge Transformation (PCG-KT), a new lens and framework for characterizing PCG methods and approaches in which content generation is enabled by the process of knowledge transformation -- transforming knowledge derived from one domain in order to apply it in another. Our work is motivated by a substantial number of recent PCG works that focus on generating novel content via repurposing derived knowledge. Such works have involved, for example, performing transfer learning on models trained on one game's content to adapt to another game's content, as well as recombining different generative distributions to blend the content of two or more games. Such approaches arose in part due to limitations in PCG via Machine Learning (PCGML) such as producing generative models for games lacking training data and generating content for entirely new games. In this paper, we categorize such approaches under this new lens of PCG-KT by offering a definition and framework for describing such methods and surveying existing works using this framework. Finally, we conclude by highlighting open problems and directions for future research in this area.
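As one concrete, minimal reading of "recombining different generative distributions", the sketch below blends two next-tile distributions learned from different games as a weighted mixture. The weighting scheme and the toy counts are illustrative assumptions, not a method prescribed by the PCG-KT framework.

```python
# Illustrative sketch only: blending two tile distributions learned from
# different games as a weighted mixture. The weighting scheme is an assumption.
from collections import Counter

def blend_distributions(dist_a, dist_b, weight_a=0.5):
    """dist_a, dist_b: Counters over next-tile choices for the same context."""
    total_a, total_b = sum(dist_a.values()), sum(dist_b.values())
    tiles = set(dist_a) | set(dist_b)
    return {t: weight_a * dist_a[t] / max(total_a, 1)
               + (1 - weight_a) * dist_b[t] / max(total_b, 1)
            for t in tiles}

# Toy counts purely for illustration, not statistics from any real corpus.
game_a = Counter({"brick": 4, "coin": 3, "empty": 13})
game_b = Counter({"platform": 6, "empty": 10, "hazard": 4})
blended = blend_distributions(game_a, game_b, weight_a=0.7)
```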
Deep Learning for Procedural Content Generation
Liu, Jialin, Snodgrass, Sam, Khalifa, Ahmed, Risi, Sebastian, Yannakakis, Georgios N., Togelius, Julian
However, the computational creativity community has identified that, in order to get a full picture of the generator (or creative program), the process by which the output content is created should be evaluated as well. Colton [11], Jordanous [56], and Pease and Colton [86] each propose frameworks and methodologies for evaluating the creativity of the process of a generator. Smith and Whitehead [116] (later expanded on by Summerville [125]) proposed methods for holistically evaluating a content generation approach, by evaluating large swaths of generated content to get a broader understanding of the generative space of a content generator and its biases within that generative space.

... combined with LVE can allow users to breed their own game levels, such as Zelda and Mario [104]. Based on [36, 140], a mixed-initiative tile-based level design tool was implemented by Schrum et al. [103], which allows humans to interact with the evolution and exploration within the latent level-design space (interface illustrated in Figure 1), and to play the generated levels in real-time. EC methods can also collaborate with humans to generate, evaluate, or repair game content. Liapis et al. [71] presented the Sentient World tool, which allows interactions with human designers and generates game ...
Exploring Level Blending across Platformers via Paths and Affordances
Sarkar, Anurag, Summerville, Adam, Snodgrass, Sam, Bentley, Gerard, Osborn, Joseph
Techniques for procedural content generation via machine learning (PCGML) have been shown to be useful for generating novel game content. While used primarily for producing new content in the style of the game domain used for training, recent works have increasingly started to explore methods for discovering and generating content in novel domains via techniques such as level blending and domain transfer. In this paper, we build on these works and introduce a new PCGML approach for producing novel game content spanning multiple domains. We use a new affordance and path vocabulary to encode data from six different platformer games and train variational autoencoders on this data, enabling us to capture the latent level space spanning all the domains and generate new content with varying proportions of the different domains.
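As a rough illustration of the modeling setup described above, here is a minimal convolutional VAE over fixed-size level segments expressed in a shared tile vocabulary, with latent interpolation standing in for blending across domains. The vocabulary size, segment size, layer widths, and the interpolation-based `blend` helper are assumptions, not the paper's exact architecture or blending procedure.

```python
# Illustrative sketch only: a small convolutional VAE over fixed-size level
# segments encoded with a shared tile vocabulary, plus latent-space blending.
# Vocabulary size, segment size, and layer widths are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB = 10        # assumed size of the shared affordance/path tile vocabulary
H = W = 16        # assumed segment height/width in tiles
LATENT = 32

class LevelVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(VOCAB, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 8
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),      # 8 -> 4
            nn.Flatten(),
        )
        self.mu = nn.Linear(64 * 4 * 4, LATENT)
        self.logvar = nn.Linear(64 * 4 * 4, LATENT)
        self.dec_fc = nn.Linear(LATENT, 64 * 4 * 4)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # 4 -> 8
            nn.ConvTranspose2d(32, VOCAB, 4, stride=2, padding=1),          # 8 -> 16
        )

    def encode(self, x):                       # x: (B, VOCAB, H, W) one-hot
        h = self.enc(x)
        return self.mu(h), self.logvar(h)

    def decode(self, z):
        h = self.dec_fc(z).view(-1, 64, 4, 4)
        return self.dec(h)                     # per-tile logits

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.decode(z), mu, logvar

def vae_loss(logits, target_ids, mu, logvar, beta=1.0):
    recon = F.cross_entropy(logits, target_ids)                  # target_ids: (B, H, W) ints
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kld

def blend(vae, seg_a, seg_b, alpha=0.5):
    """Decode an interpolation between the latents of segments from two domains."""
    with torch.no_grad():
        za, _ = vae.encode(seg_a)
        zb, _ = vae.encode(seg_b)
        logits = vae.decode((1 - alpha) * za + alpha * zb)
        return logits.argmax(dim=1)            # blended segment as tile ids
```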
Multi-Domain Level Generation and Blending with Sketches via Example-Driven BSP and Variational Autoencoders
Snodgrass, Sam, Sarkar, Anurag
Procedural content generation via machine learning (PCGML) has demonstrated its usefulness as a content and game creation approach, and has been shown to be able to support human creativity. An important facet of creativity is combinational creativity or the recombination, adaptation, and reuse of ideas and concepts between and across domains. In this paper, we present a PCGML approach for level generation that is able to recombine, adapt, and reuse structural patterns from several domains to approximate unseen domains. We extend prior work involving example-driven Binary Space Partitioning for recombining and reusing patterns in multiple domains, and incorporate Variational Autoencoders (VAEs) for generating unseen structures. We evaluate our approach by blending across $7$ domains and subsets of those domains. We show that our approach is able to blend domains together while retaining structural components. Additionally, by using different groups of training domains our approach is able to generate both 1) levels that reproduce and capture features of a target domain, and 2) levels that have vastly different properties from the input domain.
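A rough sketch of the recombination idea, assuming a pooled library of fixed-size example chunks drawn from several domains: the canvas is recursively split, BSP-style, and each leaf is filled with the library chunk whose tile histogram best matches the corresponding region of a target level. The matching feature and splitting rule are placeholders, and the paper's approach additionally incorporates VAE-generated chunks, which this sketch omits.

```python
# Illustrative sketch only: an example-driven, BSP-style recombination of level
# chunks drawn from several training domains. The matching feature (tile-type
# histogram) and the splitting rule are assumptions, not the paper's algorithm.
import numpy as np

def chunk_features(chunk, vocab_size):
    """Simple per-chunk feature: normalized tile-type histogram."""
    hist = np.bincount(chunk.ravel(), minlength=vocab_size).astype(float)
    return hist / hist.sum()

def best_match(target_feat, library, vocab_size):
    """Pick the library chunk (from any domain) closest to the target feature."""
    dists = [np.linalg.norm(chunk_features(c, vocab_size) - target_feat) for c in library]
    return library[int(np.argmin(dists))]

def bsp_fill(canvas, target, library, vocab_size, min_size=8, rng=None):
    """Recursively split the canvas; fill leaves with best-matching example chunks.

    canvas:  output integer tile grid (modified in place).
    target:  same-shape grid from the (possibly unseen) domain being approximated.
    library: list of (min_size, min_size) chunks pooled across domains.
    """
    rng = rng or np.random.default_rng()
    h, w = canvas.shape
    if h <= min_size and w <= min_size:
        canvas[:, :] = best_match(chunk_features(target, vocab_size), library, vocab_size)[:h, :w]
        return
    if (h >= w and h > min_size) or w <= min_size:     # split along the longer axis
        cut = rng.integers(min_size, h - min_size + 1) if h > 2 * min_size else h // 2
        bsp_fill(canvas[:cut], target[:cut], library, vocab_size, min_size, rng)
        bsp_fill(canvas[cut:], target[cut:], library, vocab_size, min_size, rng)
    else:
        cut = rng.integers(min_size, w - min_size + 1) if w > 2 * min_size else w // 2
        bsp_fill(canvas[:, :cut], target[:, :cut], library, vocab_size, min_size, rng)
        bsp_fill(canvas[:, cut:], target[:, cut:], library, vocab_size, min_size, rng)
```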
Capturing Local and Global Patterns in Procedural Content Generation via Machine Learning
Volz, Vanessa, Justesen, Niels, Snodgrass, Sam, Asadi, Sahar, Purmonen, Sami, Holmgård, Christoffer, Togelius, Julian, Risi, Sebastian
Recent procedural content generation via machine learning (PCGML) methods allow learning from existing content to produce similar content automatically. While these approaches are able to generate content for different games (e.g. Super Mario Bros., DOOM, Zelda, and Kid Icarus), it is an open question how well these approaches can capture large-scale visual patterns such as symmetry. In this paper, we propose match-three games as a domain to test PCGML algorithms regarding their ability to generate suitable patterns. We demonstrate that popular algorithms such as Generative Adversarial Networks struggle in this domain and propose adaptations to improve their performance. In particular, we augment the neighborhood of a Markov Random Field approach to take not only local but also symmetric positional information into account. We conduct several empirical tests, including a user study, that show the improvements achieved by the proposed modifications, and obtain promising results.
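To make the neighborhood augmentation concrete, here is a toy counts-based sketch in which the conditioning context for each board position includes not only the tiles above and to the left but also the tile at the horizontally mirrored column, when that tile has already been generated. The exact neighborhood, estimation, and sampling details in the paper differ; this only conveys the idea of adding symmetric positional information.

```python
# Illustrative sketch only: extending a local Markov-style neighborhood with a
# symmetric (horizontally mirrored) position when learning and sampling
# match-three boards. Counts-based estimation here is for illustration.
import random
from collections import Counter, defaultdict

def context(board, r, c, width):
    """Neighborhood: tile above, tile to the left, and the mirrored tile (if available)."""
    above = board[r - 1][c] if r > 0 else None
    left = board[r][c - 1] if c > 0 else None
    mc = width - 1 - c                            # horizontally mirrored column
    mirror = board[r][mc] if mc < c else None     # only used if already generated
    return (above, left, mirror)

def learn(boards):
    """Count tile frequencies conditioned on the augmented neighborhood."""
    counts = defaultdict(Counter)
    for board in boards:
        width = len(board[0])
        for r, row in enumerate(board):
            for c, tile in enumerate(row):
                counts[context(board, r, c, width)][tile] += 1
    return counts

def sample(counts, height, width, tiles, rng=None):
    """Sample a board row by row, falling back to uniform for unseen contexts."""
    rng = rng or random.Random(0)
    board = [[None] * width for _ in range(height)]
    for r in range(height):
        for c in range(width):
            dist = counts.get(context(board, r, c, width))
            if dist:
                choices, weights = zip(*dist.items())
                board[r][c] = rng.choices(choices, weights=weights)[0]
            else:
                board[r][c] = rng.choice(tiles)
    return board
```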
Leveraging Multi-Layer Level Representations for Puzzle-Platformer Level Generation
Snodgrass, Sam (Drexel University) | Ontañón, Santiago (Drexel University)
Procedural content generation via machine learning (PCGML) has been growing in recent years. However, many PCGML approaches are only explored in the context of linear platforming games and focus on modeling structural level information. Previously, we developed a multi-layer level representation, where each layer is designed to capture specific level information. In this paper, we apply our multi-layer approach to Lode Runner, a game with non-linear paths and complex actions. We test our approach by generating levels for Lode Runner with both a constrained multi-dimensional Markov chain (MdMC) approach that ensures playability and a standard MdMC sampling approach. We compare the levels sampled using the multi-layer representation against those sampled using the single-layer representation, under both the constrained and the standard sampling algorithms.
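The sketch below illustrates the general shape of such a sampler: a multi-dimensional Markov chain over cells that are tuples across layers (here a hypothetical structure layer and path layer), with a resample loop standing in for the paper's constrained sampling. The layer semantics and the `is_playable` check are placeholders, not the paper's actual constraints.

```python
# Illustrative sketch only: a multi-dimensional Markov chain over a two-layer
# tile representation, with a simple resample loop standing in for constrained
# sampling. Layer semantics and `is_playable` are hypothetical placeholders.
import random
from collections import Counter, defaultdict

def learn_mdmc(levels):
    """levels: list of 2D grids whose cells are (structure, path) tuples."""
    counts = defaultdict(Counter)
    for level in levels:
        for r, row in enumerate(level):
            for c, cell in enumerate(row):
                left = row[c - 1] if c > 0 else None
                above = level[r - 1][c] if r > 0 else None
                counts[(left, above)][cell] += 1
    return counts

def sample_level(counts, height, width, rng):
    level = [[None] * width for _ in range(height)]
    for r in range(height):
        for c in range(width):
            left = level[r][c - 1] if c > 0 else None
            above = level[r - 1][c] if r > 0 else None
            # Fall back to a crude global prior for unseen contexts.
            dist = counts.get((left, above)) or Counter(k for d in counts.values() for k in d)
            cells, weights = zip(*dist.items())
            level[r][c] = rng.choices(cells, weights=weights)[0]
    return level

def is_playable(level):
    """Hypothetical stand-in for a path-based playability check."""
    return any(cell[1] == "path" for row in level for cell in row)

def constrained_sample(counts, height, width, max_tries=100, seed=0):
    rng = random.Random(seed)
    for _ in range(max_tries):
        level = sample_level(counts, height, width, rng)
        if is_playable(level):
            return level
    return None   # no playable level found within the budget
```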
Studying the Effects of Training Data on Machine Learning-Based Procedural Content Generation
Snodgrass, Sam (Drexel University) | Summerville, Adam (University of California, Santa Cruz) | Ontanon, Santiago (Drexel University)
The exploration of Procedural Content Generation via Machine Learning (PCGML) has been growing in recent years. However, while the number of PCGML techniques and of methods for evaluating PCG techniques has been increasing, little work has been done to determine how the quality and quantity of the training data provided to these techniques affects the models or their output. Therefore, little is known about how much training data would actually be needed to deploy certain PCGML techniques in practice. In this paper we explore this question by studying the quality and diversity of the output of two well-known PCGML techniques (multi-dimensional Markov chains and Long Short-Term Memory Recurrent Neural Networks) in generating Super Mario Bros. levels while varying the amount and quality of the training data.
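A minimal sketch of this kind of study follows, with `train_fn` and `generate_fn` standing in for an arbitrary PCGML technique (such as an MdMC or LSTM level generator) and a simple mean pairwise tile-difference metric standing in for the paper's diversity measures; the subset sizes and the metric are assumptions, not the paper's protocol.

```python
# Illustrative sketch only: a study loop that varies training-set size and
# records output diversity. `train_fn` and `generate_fn` are stand-ins for a
# PCGML technique; the diversity metric is an assumption.
import random
from itertools import combinations

def tile_difference(level_a, level_b):
    """Fraction of positions at which two same-sized tile grids differ."""
    total = sum(len(row) for row in level_a)
    diffs = sum(a != b for ra, rb in zip(level_a, level_b) for a, b in zip(ra, rb))
    return diffs / total

def diversity(levels):
    """Mean pairwise tile difference over a set of generated levels."""
    pairs = list(combinations(levels, 2))
    return sum(tile_difference(a, b) for a, b in pairs) / len(pairs)

def data_size_study(corpus, train_fn, generate_fn, sizes=(1, 2, 5, 10, 15), n_samples=50, seed=0):
    """Train on increasingly large subsets of the corpus and record output diversity.

    corpus:      list of training levels (2D tile grids).
    train_fn:    callable(training_levels) -> model
    generate_fn: callable(model) -> generated level
    """
    rng = random.Random(seed)
    results = {}
    for size in sizes:
        subset = rng.sample(corpus, min(size, len(corpus)))
        model = train_fn(subset)
        generated = [generate_fn(model) for _ in range(n_samples)]
        results[size] = diversity(generated)
    return results
```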
Procedural Content Generation via Machine Learning (PCGML)
Summerville, Adam, Snodgrass, Sam, Guzdial, Matthew, Holmgård, Christoffer, Hoover, Amy K., Isaksen, Aaron, Nealen, Andy, Togelius, Julian
This survey explores Procedural Content Generation via Machine Learning (PCGML), defined as the generation of game content using machine learning models trained on existing content. As the importance of PCG for game development increases, researchers explore new avenues for generating high-quality content with or without human involvement; this paper addresses the relatively new paradigm of using machine learning (in contrast with search-based, solver-based, and constructive methods). We focus on what is most often considered functional game content such as platformer levels, game maps, interactive fiction stories, and cards in collectible card games, as opposed to cosmetic content such as sprites and sound effects. In addition to using PCG for autonomous generation, co-creativity, mixed-initiative design, and compression, PCGML is suited for repair, critique, and content analysis because of its focus on modeling existing content. We discuss various data sources and representations that affect the resulting generated content. Multiple PCGML methods are covered, including neural networks, long short-term memory (LSTM) networks, autoencoders, and deep convolutional networks; Markov models, $n$-grams, and multi-dimensional Markov chains; clustering; and matrix factorization. Finally, we discuss open problems in the application of PCGML, including learning from small datasets, lack of training data, multi-layered learning, style-transfer, parameter tuning, and PCG as a game mechanic.
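As a concrete taste of one of the simplest model families covered, the sketch below learns an $n$-gram over level columns (a level read as a left-to-right sequence of tile columns) and samples new column sequences; the column encoding and the order $n$ are illustrative choices, not prescribed by the survey.

```python
# Illustrative sketch only: an n-gram model over level columns, one of the
# simplest PCGML model families the survey covers. Encoding is an assumption.
import random
from collections import Counter, defaultdict

def learn_ngram(levels, n=3):
    """levels: list of levels, each a list of column strings (e.g. 'X--X-')."""
    model = defaultdict(Counter)
    for cols in levels:
        padded = ["<s>"] * (n - 1) + cols
        for i in range(n - 1, len(padded)):
            model[tuple(padded[i - n + 1:i])][padded[i]] += 1
    return model

def generate_ngram(model, length, n=3, seed=0):
    """Sample a sequence of level columns from the learned n-gram model."""
    rng = random.Random(seed)
    out = ["<s>"] * (n - 1)
    for _ in range(length):
        dist = model.get(tuple(out[-(n - 1):]))
        if not dist:
            break                              # unseen context: stop early
        cols, weights = zip(*dist.items())
        out.append(rng.choices(cols, weights=weights)[0])
    return out[n - 1:]
```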
An Approach to Domain Transfer in Procedural Content Generation of Two-Dimensional Videogame Levels
Snodgrass, Sam (Drexel University) | Ontanon, Santiago (Drexel University)
Statistical models, such as Markov Chains, have been recently studied in the context of procedural content generation (PCG). These models can capture statistical regularities of a set of training data and use them to sample new content. However, these techniques assume the existence of sufficient training data with which to train the models. In this paper we study the setting in which we might not have enough training data from the target domain, but we have ample training data from another, similar domain. We propose an algorithm to discover a mapping between domains, so that out-of-domain training data can be used to train the statistical model. Specifically, we apply this to two-dimensional level generation, and experiment with three classic video games: Super Mario Bros., Kid Icarus and Kid Kool.
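The following toy sketch conveys the flavor of such a mapping, pairing each source-domain tile type with the target-domain tile type whose simple occurrence statistics (overall frequency and mean vertical position) are closest; the paper's actual mapping-discovery algorithm differs and is more involved than this illustration.

```python
# Illustrative sketch only: discovering a tile-type mapping between two level
# domains by matching simple occurrence statistics, then translating
# out-of-domain levels into the target vocabulary. Features are assumptions.
import numpy as np

def tile_stats(levels, vocab):
    """Per tile type: [relative frequency, mean normalized row position]."""
    counts = {t: 0 for t in vocab}
    row_sums = {t: 0.0 for t in vocab}
    total = 0
    for level in levels:                      # level: 2D list of tile symbols
        h = len(level)
        for r, row in enumerate(level):
            for t in row:
                counts[t] += 1
                row_sums[t] += r / max(h - 1, 1)
                total += 1
    return {t: np.array([counts[t] / total, row_sums[t] / max(counts[t], 1)]) for t in vocab}

def discover_mapping(source_levels, source_vocab, target_levels, target_vocab):
    """Map each source tile type to the target tile type with the closest stats."""
    src = tile_stats(source_levels, source_vocab)
    tgt = tile_stats(target_levels, target_vocab)
    return {s: min(target_vocab, key=lambda t: np.linalg.norm(src[s] - tgt[t]))
            for s in source_vocab}

def translate(level, mapping):
    """Re-express an out-of-domain level in the target domain's vocabulary."""
    return [[mapping[t] for t in row] for row in level]
```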