To create well-crafted learning progressions, designers guide players as they introduce game skills and give players ample time to master those skills. However, analyzing the quality of learning progressions is challenging, especially during the design phase, as content is ever-changing. This research presents the application of Stratabots — automated player simulations based on models of players with varying sets of skills — to the human computation game Foldit. Stratabot performance analysis coupled with player data reveals a relatively smooth learning progression within the tutorial levels, yet still shows room for improvement. Leveraging existing general gameplaying algorithms such as Monte Carlo Evaluation can reduce the development time of this approach to automated playtesting without sacrificing the predictive power of the player model.
Beaupre, Spencer (Worcester Polytechnic Institute) | Wiles, Thomas (Worcester Polytechnic Institute) | Briggs, Sean (Worcester Polytechnic Institute) | Smith, Gillian (Worcester Polytechnic Institute)
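The Monte Carlo Evaluation idea above can be sketched as a skill-limited simulated player. Everything below (the `Move` type, `required_skill` field, and the toy numeric game) is an illustrative assumption, not code from the paper: the simulated player scores each move it is skilled enough to attempt by averaging the outcomes of random playouts, so bots with different skill sets make different choices.

```python
import random
from dataclasses import dataclass

# Hypothetical toy game: the state is a number, each move adds to it,
# and a bigger final state is a better score. Each move is gated by
# the skill a simulated player must possess to attempt it.
@dataclass(frozen=True)
class Move:
    delta: int
    required_skill: str

MOVES = [Move(1, "basic"), Move(3, "basic"), Move(5, "advanced")]

def legal_moves(state, skills):
    """Moves this skill-limited player is able to consider."""
    return [m for m in MOVES if m.required_skill in skills]

def monte_carlo_evaluate(state, skills, n_playouts=100, depth=5, seed=0):
    """Pick the move with the highest mean score over random playouts."""
    rng = random.Random(seed)  # seeded for reproducible evaluation
    best_move, best_value = None, float("-inf")
    for move in legal_moves(state, skills):
        total = 0.0
        for _ in range(n_playouts):
            s = state + move.delta
            for _ in range(depth):  # random rollout within the skill set
                s += rng.choice(legal_moves(s, skills)).delta
            total += s
        value = total / n_playouts
        if value > best_value:
            best_move, best_value = move, value
    return best_move

# A "novice" bot cannot use the advanced move; an "expert" bot can.
novice_choice = monte_carlo_evaluate(0, {"basic"})
expert_choice = monte_carlo_evaluate(0, {"basic", "advanced"})
```

Comparing where the novice and expert bots diverge in choice (and in resulting performance) is the kind of signal such a simulation can feed back to a level designer.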
Existing approaches to multi-game level generation rely on level structure emerging organically through level fitness. In this paper, we present a method for generating levels for games in the GVGAI framework using a design-pattern-based approach, where the design patterns are derived from an analysis of the existing corpus of GVGAI game levels. We created two new generators, one constructive and one search-based, and compared them to a prior search-based generator. Results show that our generators are comparable to, and even preferred over, the prior generator, especially among players with existing game experience. Our search-based generator also outperforms our constructive generator in terms of player preference.
Aytemiz, Batu (University of California, Santa Cruz) | Karth, Isaac (University of California, Santa Cruz) | Harder, Jesse (University of California, Santa Cruz) | Smith, Adam M. (University of California, Santa Cruz) | Whitehead, Jim (University of California, Santa Cruz)
Most tutorials in video games do not consider the skill level of the player when deciding what information to present. This makes many tutorials either tedious for experienced players or not informative enough for players who are new to the genre. With Talin, implemented as an asset for the Unity game engine, we make it possible to create a mastery model of an individual player's skill levels by operationalizing Dan Cook's skill atom theory. We propose that this mastery model opens up a new design space for tutorials. We show an example tutorial built with Talin using only the graphical components provided by our framework, without the need to write any code. The resulting dynamic tutorial gives the player information only when they need it, whenever they need it. While the novice player receives all the information they need to learn the system, the expert player is not bogged down by tooltip pop-ups about mechanics they have already mastered.
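The mastery-model idea can be sketched as follows. The class and method names here (`SkillAtom`, `MasteryModel`, `should_show_tooltip`) are hypothetical, not Talin's actual API: each skill atom counts observed successful uses of a mechanic, and the tutorial shows a tooltip only for atoms the player has not yet mastered.

```python
# Illustrative sketch of a per-player mastery model over skill atoms.
class SkillAtom:
    def __init__(self, name, mastery_threshold=3):
        self.name = name
        self.successes = 0  # observed successful uses of this mechanic
        self.mastery_threshold = mastery_threshold

    def record_success(self):
        self.successes += 1

    @property
    def mastered(self):
        return self.successes >= self.mastery_threshold

class MasteryModel:
    """Tracks one player's progress across all skill atoms."""
    def __init__(self, atom_names):
        self.atoms = {name: SkillAtom(name) for name in atom_names}

    def should_show_tooltip(self, atom_name):
        # Suppress tutorial messaging for already-mastered mechanics.
        return not self.atoms[atom_name].mastered

model = MasteryModel(["jump", "double_jump"])
for _ in range(3):               # player demonstrates jumping 3 times
    model.atoms["jump"].record_success()
```

Under this sketch, an expert who immediately demonstrates a mechanic never sees its tooltip, while a novice keeps receiving it until the model observes mastery.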
Procedural Content Generation (PCG) has been a part of video games for the majority of their existence and an area of active research over the past decade. However, despite the interest in PCG, there is no commonly accepted methodology for assessing and analyzing a generator. Furthermore, recent machine-learned PCG techniques commonly state the goal of learning the design within the original content, but there has been little assessment of whether these techniques actually achieve this goal. This paper presents a number of techniques for the assessment and analysis of PCG systems, giving practitioners and researchers better insight into the strengths and weaknesses of these systems, allowing for better comparison between systems, and reducing the reliance on ad hoc, cherry-picking-prone techniques.
Summerville, Adam (California State Polytechnic University, Pomona) | Martens, Chris (North Carolina State University) | Samuel, Ben (University of New Orleans) | Osborn, Joseph (Pomona College) | Wardrip-Fruin, Noah (University of California, Santa Cruz) | Mateas, Michael (University of California, Santa Cruz)
Current approaches to game generation do not understand the games they generate. As a result, even the most sophisticated systems in this regard, e.g., Game-o-Matic, betray this problem by generating games whose goals are at odds with their mechanics. We describe Gemini, the first bidirectional game generation and analysis system. Gemini can take games as input, perform a proceduralist reading of them, and produce possible interpretations that the games might afford. By utilizing the declarative nature of Answer Set Programming (ASP), this analysis pathway also opens up generation of games targeting specific interpretations, making it possible to ensure that the generated games are consistent with the desired reading. For Gemini, we developed a game specification language capable of expressing a larger domain of games than VGDL, the most widespread representation. We demonstrate the generality of our approach by generating games in a series of domains based on prototypes hand-created by a team without knowledge of Gemini's constraints and capabilities.
We present a new way to represent and understand experience managers: AI agents that tune the parameters of a running game to pursue a designer's goal. Existing representations of AI managers are diverse, which complicates the task of drawing useful comparisons between them. In contrast to previous representations, ours is built on a unifying observation: every game/manager pair can be viewed as simply a game with the manager embedded inside. From this basis, we show that several common but differently represented concepts of experience management can be re-expressed in a unified way. We demonstrate our new representation concretely by comparing two different representations, Search-Based Drama Management and Generalized Experience Management, and present the insights gained from this effort.
A significant body of work has advocated Learning from Demonstration (LfD) as a promising approach for allowing end-users to create behaviors for in-game characters without programming. However, one major problem with this approach is that many LfD algorithms require large amounts of training data and are thus impractical for learning from human demonstrators. In this paper, we focus on LfD with limited training data, and specifically on the problem of active LfD where the demonstrators are human. We present the results of a user study comparing SALT, a new active LfD approach, against a previous state-of-the-art active LfD algorithm, showing that SALT significantly outperforms it when learning from a limited amount of data in the context of learning to play a puzzle video game.
Lee, Dennis (University of California, Berkeley) | Tang, Haoran (University of California, Berkeley) | Zhang, Jeffrey O. (University of California, Berkeley) | Xu, Huazhe (University of California, Berkeley) | Darrell, Trevor (University of California, Berkeley) | Abbeel, Pieter (University of California, Berkeley)
We present a novel modular architecture for StarCraft II AI. The architecture splits responsibilities between multiple modules that each control one aspect of the game, such as build-order selection or tactics. A centralized scheduler reviews macros suggested by all modules and decides their order of execution. An updater keeps track of environment changes and instantiates macros into series of executable actions. Modules in this framework can be optimized independently or jointly via human design, planning, or reinforcement learning. We present the first result of applying deep reinforcement learning to train two of the six modules of a modular agent with self-play, achieving win rates of 92% or 86% against the "Harder" (level 5) built-in Blizzard bot in Zerg vs. Zerg matches, with or without fog-of-war.
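The module/scheduler/updater split described above might look like the following sketch. The module names, macros, priorities, and expansion table are invented for illustration and are not the agent's actual code: each module proposes prioritized macros, the scheduler orders them, and the updater expands each macro into primitive actions.

```python
import heapq

# Hypothetical modules, each owning one aspect of play and proposing
# (priority, macro) pairs; a lower number means higher priority.
class BuildOrderModule:
    def propose(self, state):
        return [(1, "build_drone")]

class TacticsModule:
    def propose(self, state):
        return [(0, "attack_enemy")]

# Updater's lookup: how each macro unfolds into executable actions.
MACRO_EXPANSIONS = {
    "build_drone": ["select_larva", "morph_drone"],
    "attack_enemy": ["select_army", "move_to_enemy"],
}

def schedule_and_expand(modules, state):
    """Collect macro suggestions, order them, and expand to actions."""
    heap = []
    for module in modules:
        for priority, macro in module.propose(state):
            heapq.heappush(heap, (priority, macro))
    actions = []
    while heap:  # scheduler decides execution order
        _, macro = heapq.heappop(heap)
        actions.extend(MACRO_EXPANSIONS[macro])  # updater step
    return actions

actions = schedule_and_expand([BuildOrderModule(), TacticsModule()], state={})
```

Because each module only emits macros, any single module can be swapped for a hand-designed, planned, or learned policy without touching the others.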
Intelligent autonomous agents acting in dynamic environments in real time are often required to follow long-term strategies while also remaining reactive and being able to act deliberately. To create intelligent behaviors for video game characters, there are two common approaches: planners are used for long-term strategic planning, whereas Behavior Trees allow for reactive acting. Although both methodologies have their advantages, when used on their own they fail to fully achieve both requirements described above. In this work, we propose a hybrid approach that combines a Hierarchical Task Network planner for high-level planning while delegating low-level decision making and acting to Behavior Trees. Furthermore, we compare this approach with a pure planner in a multi-agent environment.
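A minimal sketch of the hybrid, with illustrative task and behavior names that are not the paper's implementation: an HTN planner decomposes a compound goal into primitive tasks, and each primitive is carried out by a small behavior tree that can react to the current world state at execution time.

```python
# Hypothetical HTN method table: compound task -> ordered subtasks.
METHODS = {
    "attack_base": ["move_to_base", "fight"],
}

def htn_plan(task):
    """Recursively decompose compound tasks into primitive tasks."""
    if task not in METHODS:
        return [task]
    plan = []
    for sub in METHODS[task]:
        plan.extend(htn_plan(sub))
    return plan

# Behavior-tree building blocks over a shared blackboard dict.
def sequence(*children):
    def run(bb):
        return all(child(bb) for child in children)  # fail-fast
    return run

def at_base(bb):
    return bb.get("at_base", False)

def walk(bb):
    bb["at_base"] = True
    return True

def fight(bb):
    bb["enemy_hp"] = 0
    return True

BEHAVIORS = {
    # Reactive: only walk if we are not already at the base.
    "move_to_base": sequence(lambda bb: at_base(bb) or walk(bb)),
    "fight": sequence(at_base, fight),
}

def execute(goal, blackboard):
    for primitive in htn_plan(goal):
        if not BEHAVIORS[primitive](blackboard):
            return False  # a BT failure here would trigger replanning
    return True

bb = {}
ok = execute("attack_base", bb)
```

The planner supplies the long-term ordering, while each behavior tree handles moment-to-moment conditions, which is the division of labor the abstract describes.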
Mixed-initiative PCG systems provide a way to leverage the expressive power of algorithmic content generation while lowering the technical barrier for content creators. While these tools are a proof of concept of how PCG systems can aid aspiring designers in reaching their vision, issues remain in capturing designer intent and in interface complexity. In this paper we introduce CADI (Conversational Assistive Design Interface), a mixed-initiative PCG system that uses natural language input to explore the design space of variations of the game Pong. We motivate the creation of CADI and discuss the implementation and design decisions taken to address designer intent and interface complexity in mixed-initiative PCG systems.