If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Many AI researchers want to test the utility of their prototypes in complex task environments, such as those defined by commercial gaming simulators. Likewise, many developers of commercial games need to solve tasks (e.g., game balancing, providing rational agent behaviors) that these research systems can address. However, integrating the two requires substantial effort. We will demonstrate TIELT, a testbed designed to assist with evaluating research prototypes in these task environments.
Real-Time Strategy is among the most popular genres of commercial PC games, and it also has widely applicable analogs in the field of "Serious Games," such as military simulations, city planning, and other forms of simulation involving multi-agent coordination and an underlying economy. One of the core tasks in playing a traditional Real-Time Strategy game is building a base in an effective manner and defending it well. Creating an AI that can construct a successful wall was one of the more challenging areas of development on Empire Earth II, as building a wall requires analysis of the terrain and techniques from computational geometry. An effective wall can hold off enemy troops and keep battles away from the delicate economy inside the base.
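One geometric ingredient of automated wall building can be illustrated with a convex hull: given the footprints of a base's buildings, a candidate wall layout is the hull of those points, with consecutive hull vertices defining wall segments. This is only a hypothetical sketch of the kind of computational geometry involved, not Empire Earth II's actual algorithm.

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # Cross product of vectors o->a and o->b; <= 0 means a non-left turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Drop the last point of each half because it repeats in the other half.
    return lower[:-1] + upper[:-1]

def wall_segments(buildings):
    """Consecutive hull vertices define the wall segments enclosing the base."""
    hull = convex_hull(buildings)
    return list(zip(hull, hull[1:] + hull[:1]))
```

A real implementation would additionally snap segments to terrain, leave gates, and avoid obstacles; the hull merely guarantees every building ends up inside the wall.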
To save precious time and space, many games and simulations use static terrain and fixed (or random) reconstruction of areas that a player leaves and later revisits. This can result in noticeable differences between the reconstructed area and the player's recollections (or expectations). These differences can lessen a player's immersion in the game, or the usefulness of the simulation. We propose an approach for environment reconstruction that uses a Bayesian Network to quickly and easily calculate likely effects that external factors have on the environment. The reconstruction of revisited areas becomes less disconcerting and permits the incorporation of plausible changes based on unobserved, yet reasonably expected, events that could have occurred during the player's absence.
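A toy version of this idea can be sketched with a two-node Bayesian network: an unobserved external factor (here, a drought) influences whether a forest area the player left has burned down, conditioned on how long the player was away. All variable names and probabilities below are illustrative assumptions, not values from the system described above.

```python
import random

# Hypothetical prior on the unobserved external factor.
P_DROUGHT = 0.3

# Hypothetical CPT: P(burned | drought, long_absence).
P_BURNED = {
    (True, True): 0.7,
    (True, False): 0.2,
    (False, True): 0.1,
    (False, False): 0.02,
}

def p_burned(long_absence, drought=None):
    """Probability the area burned; marginalizes over drought if unobserved."""
    if drought is not None:
        return P_BURNED[(drought, long_absence)]
    return (P_DROUGHT * P_BURNED[(True, long_absence)]
            + (1 - P_DROUGHT) * P_BURNED[(False, long_absence)])

def reconstruct(area, long_absence, rng=random.random):
    """Sample a plausible state for a revisited area from the network."""
    burned = rng() < p_burned(long_absence)
    return dict(area, burned=burned)
```

Because the network only needs a handful of table lookups, reconstruction stays cheap enough to run when the player re-enters an area, while still reflecting events that plausibly happened in their absence.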
In declarative optimization-based drama management (DODM), a game's story is abstracted as a sequence of plot points; possible drama manager interventions are abstracted as a set of DM actions. The author defines a function evaluating story quality, and an optimization method (currently reinforcement learning) chooses DM actions so as to maximize expected story quality according to that evaluation function. While previous work has developed this approach at a technical level and discussed its potential applications, no work to date has used DODM to build real games. We report on our experiences designing a game in the Neverwinter Nights engine, entitled The Guilty, in which we use DODM to create a dynamic plot that in a previous design iteration we had found difficult to create with other techniques.
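The core loop can be sketched with toy data: an author-defined evaluation function scores plot-point sequences, and the drama manager picks the action whose outcome scores highest. The plot points, scoring rules, and greedy action selection below are illustrative stand-ins; actual DODM uses reinforcement learning over the full space of story continuations rather than a one-step greedy choice.

```python
def evaluate(story):
    """Hypothetical author-defined quality function over a plot-point sequence."""
    score = 0.0
    # Reward discovering the clue before the confrontation.
    if "find_clue" in story and "confront_suspect" in story:
        if story.index("find_clue") < story.index("confront_suspect"):
            score += 1.0
    # Reward reaching an ending.
    if story and story[-1] == "ending":
        score += 0.5
    return score

def best_dm_action(history, candidate_actions):
    """Greedy stand-in for the learned policy: choose the DM action whose
    immediate plot-point outcome scores highest under the evaluation function."""
    return max(candidate_actions, key=lambda pp: evaluate(history + [pp]))
```

The point of the declarative formulation is that the author only writes `evaluate`; the optimizer, not the author, decides when to intervene.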
At a time when players expect ever more realistic game worlds, we propose a technique aimed at creating interesting and appealing societies of multicultural and self-adapting NPCs. We define the "multicultural" concept as the ability of our NPCs to belong to different kinds of social groups, each of which adopts a distinct strategy that lets its members adapt to the world. Indeed, we are convinced this diversity can yield more realistic worlds. To create NPCs that can live in such worlds, a simple solution would be to manually add a large number of realistic interactions between NPCs and objects in the world. However, this process can lead to enormous finite state machines that are difficult to manage and maintain.
This paper investigates the challenges posed by the application of reinforcement learning to large-scale strategy games. In this context, we present steps and techniques that synthesize new ideas with state-of-the-art methods from several areas of machine learning into a novel integrated learning approach for this class of games. The performance of the approach is demonstrated on the task of learning valuable game strategies for a commercial wargame.
In this document, we describe our work applying natural language (NL) technologies to improve non-player character (NPC) dialog interactions in games, specifically role-playing games (RPGs). Our approach is to adapt the standard dialog menu interaction so that the menu items are dynamically generated at game runtime rather than scripted at development time. In our system, menu items are constructed by manipulating abstract semantic representations stored in the NPC knowledge base, converting them into NL text, and then ranking them so that the most relevant items are placed at the top of the menu. We demonstrate our approach in the context of a small RPG.
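The pipeline described above can be sketched with toy data: semantic frames in an NPC's knowledge base are realized as English menu items via templates, then sorted so topic-relevant items come first. The frames, templates, and relevance measure below are illustrative assumptions, not the system's actual representations.

```python
# Hypothetical NPC knowledge base: (predicate, subject, value) frames.
KB = [
    ("location", "dragon", "northern cave"),
    ("owner", "sword", "blacksmith"),
    ("weakness", "dragon", "ice magic"),
]

# Hypothetical realization templates, one per predicate.
TEMPLATES = {
    "location": "Where can I find the {0}?",
    "owner": "Who owns the {0}?",
    "weakness": "What is the {0}'s weakness?",
}

def realize(frame):
    """Convert a semantic frame into NL text for a menu item."""
    pred, subj, _ = frame
    return TEMPLATES[pred].format(subj)

def menu(topic, kb=KB):
    """Realize every frame and sort topic-relevant items to the top."""
    scored = [(frame[1] == topic, realize(frame)) for frame in kb]
    # Python's sort is stable, so items within each relevance tier keep KB order.
    return [text for relevant, text in sorted(scored, key=lambda s: not s[0])]
```

A fuller system would score relevance on the whole dialog state rather than a single topic match, but the separation into knowledge base, realization, and ranking stages is the same.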
In this paper we introduce the Biased Cost Pathfinding (BCP) algorithm, a simple yet effective meta-algorithm that can be fused with any single-agent search method to make it usable in multi-agent environments. In particular, we focus on pathfinding problems common in real-time strategy games where units can have different functions and mission priorities. We evaluate BCP paired with the A* algorithm in several game-like scenarios. Performance improvements of up to 90% are demonstrated with respect to several metrics.
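The biasing idea can be sketched on a grid: higher-priority units plan first, then the cost of cells along their paths is inflated so that lower-priority units, planned with an unmodified single-agent A*, tend to route around them. The grid, bias value, and priority scheme below are illustrative assumptions, not the paper's exact formulation.

```python
import heapq

def astar(grid, start, goal, bias):
    """A* on a 4-connected grid; grid[r][c] == 1 is blocked; bias adds cell cost."""
    def h(p):  # Manhattan heuristic; admissible since every step costs >= 1
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0.0, start, [start])]
    best_g = {}
    while frontier:
        f, g, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if best_g.get(pos, float("inf")) <= g:
            continue  # already expanded with an equal or better cost
        best_g[pos] = g
        r, c = pos
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g + 1 + bias.get((nr, nc), 0.0)
                heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc), path + [(nr, nc)]))
    return None

def plan_all(grid, units):
    """Plan units in descending priority order, biasing cells used by earlier paths."""
    bias, paths = {}, {}
    for name, start, goal in units:  # assumed pre-sorted by priority
        path = astar(grid, start, goal, bias)
        paths[name] = path
        for cell in path or []:
            bias[cell] = bias.get(cell, 0.0) + 5.0  # hypothetical penalty per use
    return paths
```

Because the bias lives entirely in the cost map, the underlying search routine needs no multi-agent logic at all, which is what makes BCP usable with any single-agent search method.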
Although other genres have used procedural level generation to extend gameplay and replayability, platformer games have not yet seen successful level generation. This paper proposes a new four-layer hierarchy to represent platform game levels, with a focus on representing repetition, rhythm, and connectivity. It also proposes a way to use this model to procedurally generate new levels.