Silva, Fernando de Mesentier
Automated Playtesting of Matching Tile Games
Mugrai, Luvneesh, Silva, Fernando de Mesentier, Holmgård, Christoffer, Togelius, Julian
Matching tile games are an extremely popular game genre. Arguably the most popular variant, Match-3 games, are simple-to-understand puzzle games, which makes them good benchmarks for research. In this paper, we propose developing different procedural personas for Match-3 games that approximate different human playstyles, in order to create an automated playtesting system. The procedural personas are realized by evolving the utility function of a Monte Carlo Tree Search agent. We compare the performance and results of the evolved agents with a standard, vanilla Monte Carlo Tree Search implementation as well as with a random move-selection agent. We then observe the impact on both the game's design and the game design process. Lastly, a user study compares the agents to human play traces.
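The abstract describes the persona approach only at a high level. As an illustration of the general idea, and not the paper's implementation, the sketch below expresses a persona as a weighted utility over assumed Match-3 features and tunes the weights with a toy evolutionary loop; simulate(), the feature names, and all parameters are invented stand-ins for the real MCTS agent and forward model.

```python
# Minimal sketch, not the paper's code: a procedural persona as a weighted
# utility over assumed Match-3 features, with weights tuned by a simple
# evolutionary loop. simulate() stands in for playing the game with an MCTS
# agent that maximizes the persona's utility.
import math
import random
from dataclasses import dataclass

FEATURES = ["score_gain", "tiles_cleared", "cascades", "moves_left"]  # assumed features

@dataclass
class Persona:
    weights: dict  # feature name -> weight

    def utility(self, features: dict) -> float:
        # A real MCTS agent would call this to score states reached in rollouts.
        return sum(self.weights[f] * features.get(f, 0.0) for f in FEATURES)

def simulate(persona: Persona, rng: random.Random) -> dict:
    """Stand-in for an MCTS playthrough on a real Match-3 forward model: here we
    simply pretend that valuing a feature more makes the agent produce more of it."""
    return {f: max(0.0, min(1.0, 0.5 + 0.4 * math.tanh(persona.weights[f]) + rng.gauss(0, 0.05)))
            for f in FEATURES}

def fitness(persona: Persona, target: dict, episodes: int = 20) -> float:
    """Negative distance between the persona's average play statistics and a
    designer-specified target profile (e.g. a 'score maximizer' playstyle)."""
    rng = random.Random(0)
    totals = {f: 0.0 for f in FEATURES}
    for _ in range(episodes):
        stats = simulate(persona, rng)
        for f in FEATURES:
            totals[f] += stats[f] / episodes
    return -sum(abs(totals[f] - target[f]) for f in FEATURES)

def evolve(target: dict, pop_size: int = 16, generations: int = 30) -> Persona:
    pop = [Persona({f: random.uniform(-1, 1) for f in FEATURES}) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, target), reverse=True)
        parents = pop[: pop_size // 4]
        children = [Persona({f: w + random.gauss(0, 0.1)
                             for f, w in random.choice(parents).weights.items()})
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=lambda p: fitness(p, target))

if __name__ == "__main__":
    score_maximizer = {"score_gain": 0.9, "tiles_cleared": 0.7, "cascades": 0.8, "moves_left": 0.3}
    print(evolve(score_maximizer).weights)
```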
The Many AI Challenges of Hearthstone
Hoover, Amy K., Togelius, Julian, Lee, Scott, Silva, Fernando de Mesentier
Games have benchmarked AI methods since the inception of the field, with classic board games such as Chess, Checkers, and Go long dominating the landscape. The set of AI problems associated with video games has in recent decades expanded from simply playing games to win, to playing games in particular styles, generating game content, modeling players, etc. Different games pose very different challenges for AI systems, and several different AI challenges can typically be posed by the same game. In this article we analyze the popular collectible card game Hearthstone (Blizzard 2014) and describe a varied set of interesting AI challenges posed by this game. Collectible card games are relatively understudied in the AI community, despite their popularity and the interesting challenges they pose. Analyzing a single game in-depth in the manner we do here allows us to see the entire field of AI and Games through the lens of a single game, discovering a few new variations on existing research topics.
Evolving the Hearthstone Meta
Silva, Fernando de Mesentier, Canaan, Rodrigo, Lee, Scott, Fontaine, Matthew C., Togelius, Julian, Hoover, Amy K.
Balancing an ever-growing strategic game of high complexity, such as Hearthstone, is a daunting task. The goal of making strategies diverse and customizable results in a delicate, intricate system. Tuning over 2000 cards to produce the desired outcome without disrupting the existing environment is a laborious challenge. In this paper, we discuss the impact that changes to existing cards can have on strategy in Hearthstone. By analyzing win rates in match-ups across different decks, each played with different strategies, we compare deck performance before and after changes are made to improve or worsen individual cards. Then, using an evolutionary algorithm, we search for a combination of changes to card attributes that pushes the decks toward equal, 50% win rates. We then extend our evolutionary algorithm into a multi-objective approach that searches for this result while making the minimum number of changes, and therefore the minimum disruption, to the existing cards. Lastly, we propose and evaluate metrics to serve as heuristics for deciding which cards to target with balance changes.
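To make the kind of search described above concrete, the sketch below evolves integer deltas to assumed card attributes so that a toy win-rate matrix moves toward 50% while touching as few cards as possible. It is not the authors' system: win_rate() is an invented stand-in for simulating games, and the paper's multi-objective formulation is scalarized here into a single weighted score for brevity.

```python
# Rough sketch, not the paper's implementation: evolve per-card attribute
# deltas toward balanced (50%) deck match-ups with minimal disruption.
import random

CARDS = [f"card_{i}" for i in range(10)]      # toy card pool
ATTRS = ["attack", "health", "cost"]          # assumed tunable attributes
DECKS = [CARDS[:5], CARDS[5:]]                # two toy decks

def win_rate(deck_a, deck_b, changes):
    """Stand-in for playing many AI-vs-AI games between two decks.
    Toy model: buffing a card shifts the win rate toward the deck containing it."""
    shift = 0.0
    for (card, _attr), delta in changes.items():
        if card in deck_a:
            shift += 0.02 * delta
        elif card in deck_b:
            shift -= 0.02 * delta
    return min(1.0, max(0.0, 0.65 + shift))   # assume the first deck starts out overpowered

def balance_error(changes):
    """Total deviation from 50% across all deck match-ups."""
    return sum(abs(win_rate(a, b, changes) - 0.5)
               for i, a in enumerate(DECKS) for b in DECKS[i + 1:])

def num_changed_cards(changes):
    return len({card for (card, _attr), delta in changes.items() if delta != 0})

def mutate(changes):
    child = dict(changes)
    key = (random.choice(CARDS), random.choice(ATTRS))
    child[key] = child.get(key, 0) + random.choice([-1, 1])
    return child

def evolve(generations=300, pop_size=20, disruption_weight=0.01):
    pop = [dict() for _ in range(pop_size)]
    def score(changes):
        # Scalarized stand-in for the paper's multi-objective search.
        return balance_error(changes) + disruption_weight * num_changed_cards(changes)
    for _ in range(generations):
        pop.sort(key=score)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return min(pop, key=score)

if __name__ == "__main__":
    best = evolve()
    print("changes:", {k: v for k, v in best.items() if v != 0})
    print("balance error:", round(balance_error(best), 3),
          "cards touched:", num_changed_cards(best))
```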
Winning Isn't Everything: Training Human-Like Agents for Playtesting and Game AI
Zhao, Yunqi, Borovikov, Igor, Beirami, Ahmad, Rupert, Jason, Somers, Caedmon, Harder, Jesse, Silva, Fernando de Mesentier, Kolen, John, Pinto, Jervis, Pourabolghasem, Reza, Chaput, Harold, Pestrak, James, Sardari, Mohsen, Lin, Long, Aghdaie, Navid, Zaman, Kazi
Recently, there have been several high-profile achievements by agents that learn to play games against humans and beat them. We consider an alternative approach that instead addresses game design for a better player experience by training human-like game agents. Specifically, we study the problem of training game agents in service of the development processes of the game developers who design, build, and operate modern games. We highlight some of the ways in which we think intelligent agents can assist game developers in understanding their games, and even in building them. Our early results using the proposed agent framework mark a few steps toward addressing the unique challenges that game developers face.
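The abstract does not spell out how human-likeness is obtained, and the framework itself is not shown here. Purely as an illustration of one common direction, behavioral cloning from recorded play traces, the sketch below uses a toy nearest-neighbor policy over (state features, action) pairs; the features, actions, and "human" data are all invented.

```python
# Illustrative sketch only, not the paper's framework: approximating human-like
# behavior by cloning recorded play traces with a 1-nearest-neighbor policy.
import math
import random

def record_dummy_traces(n_states=200):
    """A 'play trace' is a list of (state_features, action) pairs; here the
    'human' behavior is faked with a few hand-written rules."""
    traces = []
    for _ in range(n_states):
        state = {"health": random.random(), "gold": random.random(), "threat": random.random()}
        if state["threat"] > 0.7 and state["health"] < 0.4:
            action = "defend"
        elif state["gold"] > 0.8:
            action = "shop"
        elif state["threat"] > 0.5:
            action = "attack"
        else:
            action = "explore"
        traces.append((state, action))
    return traces

class ClonedPolicy:
    """Picks the action taken in the most similar recorded human state."""
    def __init__(self, traces):
        self.traces = traces

    def act(self, state):
        def dist(other):
            return math.sqrt(sum((state[k] - other[k]) ** 2 for k in state))
        _nearest_state, action = min(self.traces, key=lambda pair: dist(pair[0]))
        return action

if __name__ == "__main__":
    policy = ClonedPolicy(record_dummy_traces())
    print(policy.act({"health": 0.2, "gold": 0.1, "threat": 0.9}))  # likely "defend"
```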
Exploring Gameplay With AI Agents
Silva, Fernando de Mesentier, Borovikov, Igor, Kolen, John, Aghdaie, Navid, Zaman, Kazi
The process of playtesting a game is subjective, expensive, and incomplete. In this paper, we present a playtesting approach that explores the game space with automated agents and collects data to answer questions posed by the designers. Rather than having agents interact with an actual game client, this approach recreates the bare-bones mechanics of the game as a separate system. Our agent is able to play in minutes what would take testers days of organic gameplay. The analysis of thousands of game simulations exposed imbalances in game actions, identified inconsequential rewards, and evaluated the effectiveness of optional strategic choices. Our test-case game, The Sims Mobile, was recently released, and the findings shown here influenced design changes that resulted in an improved player experience.
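The actual Sims Mobile model is not published with the abstract. As a generic illustration of the approach, a stripped-down recreation of the mechanics played many times by an automated agent while per-action statistics are logged for designers, the sketch below uses an invented energy/reward loop; all actions, costs, and rates are assumptions.

```python
# Minimal sketch, not the actual game model: a bare-bones game loop recreated
# as a standalone simulator, played thousands of times by a simple agent while
# logging per-action statistics (e.g. reward per unit of energy spent).
import random
from collections import defaultdict

ACTIONS = {                    # assumed action -> (energy cost, mean reward)
    "short_task": (1, 2.0),
    "long_task": (4, 7.0),
    "risky_task": (2, 3.0),    # pays double half of the time, nothing otherwise
}

def play_one_game(policy, max_energy=20):
    """Run one playthrough of the stripped-down loop and log reward per action."""
    energy, log = max_energy, defaultdict(lambda: [0, 0.0])   # action -> [uses, total reward]
    while energy > 0:
        action = policy(energy)
        cost, mean_reward = ACTIONS[action]
        if cost > energy:
            break
        reward = mean_reward
        if action == "risky_task":
            reward = mean_reward * 2 if random.random() < 0.5 else 0.0
        energy -= cost
        log[action][0] += 1
        log[action][1] += reward
    return log

def random_policy(energy):
    return random.choice(list(ACTIONS))

def run_experiment(n_games=10000):
    totals = defaultdict(lambda: [0, 0.0])
    for _ in range(n_games):
        for action, (uses, reward) in play_one_game(random_policy).items():
            totals[action][0] += uses
            totals[action][1] += reward
    for action, (uses, reward) in totals.items():
        cost = ACTIONS[action][0]
        print(f"{action}: avg reward/use={reward / uses:.2f}, "
              f"reward/energy={reward / (uses * cost):.2f}")

if __name__ == "__main__":
    run_experiment()
```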