This literature review covers AI techniques used for real-time strategy video games, focusing specifically on StarCraft. It finds that the main areas of current academic research are in tactical and strategic decision-making, plan recognition, and learning, and it outlines the research contributions in each of these areas. The paper then contrasts the use of game AI in academia and industry, finding the academic research heavily focused on creating game-winning agents, while the industry aims to maximise player enjoyment. It finds the industry adoption of academic research is low because it is either inapplicable or too time-consuming and risky to implement in a new game, which highlights an area for potential investigation: bridging the gap between academia and industry. Finally, the areas of spatial reasoning, multi-scale AI, and cooperation are found to require future work, and standardised evaluation methods are proposed to produce comparable results between studies.
Video games are complex simulation environments with many real-world properties that need to be addressed in order to build robust intelligence. In particular, real-time strategy games provide a multi-scale challenge which requires both deliberative and reactive reasoning processes. Experts approach this task by studying a corpus of games, building models for anticipating opponent actions, and practicing within the game environment. We motivate the need for integrating heterogeneous approaches by enumerating a range of competencies involved in gameplay and discuss how they are being implemented in EISBot, a reactive planning agent that we have applied to the task of playing real-time strategy games at the same granularity as humans.
In order to experiment with machine learning and data mining techniques in the domain of Real-Time Strategy games such as StarCraft, a dataset is required that captures the complex detail of the interactions taking place between the players and the game. This paper describes a new extraction process by which game data is extracted both directly from game log (replay) files, and indirectly through simulating the replays within the StarCraft game engine. Data is then stored in a compact, hierarchical, and easily accessible format. This process is applied to a collection of expert replays, creating a new standardised dataset. The data recorded is enough for almost the complete game state to be reconstructed, from either player's viewpoint, at any point in time (to the nearest second). This process has revealed issues in some of the source replay files, as well as discrepancies in prior datasets. Where practical, these errors have been removed in order to produce a higher-quality reusable dataset.
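The "compact, hierarchical" format described above can be pictured as nested per-second snapshots of the game state. The sketch below is an illustrative assumption of what such a record might look like; the field names and structure are hypothetical and the actual dataset's schema may differ.

```python
# Hypothetical sketch of a hierarchical per-second game-state record of
# the kind the extraction process might produce. All field names are
# illustrative assumptions, not the dataset's actual schema.
from dataclasses import dataclass, field

@dataclass
class UnitState:
    unit_id: int
    unit_type: str
    x: int
    y: int
    hp: int

@dataclass
class PlayerState:
    minerals: int
    gas: int
    supply_used: int
    units: list = field(default_factory=list)

@dataclass
class GameFrame:
    second: int                                   # game time, to the nearest second
    players: dict = field(default_factory=dict)   # player id -> PlayerState

# Reconstructing one player's viewpoint at t = 30 s:
frame = GameFrame(second=30)
frame.players[0] = PlayerState(minerals=250, gas=100, supply_used=18,
                               units=[UnitState(1, "SCV", 120, 96, 60)])
print(frame.players[0].units[0].unit_type)  # SCV
```

Keying each frame by game second, with per-player sub-records, is what allows near-complete state reconstruction from either player's viewpoint at any point in time.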
This document summarises my research in the area of Real-Time Strategy (RTS) video game Artificial Intelligence (AI). The main objective of this research is to increase the quality of AI used in commercial RTS games, which has seen little improvement over the past decade. This objective will be addressed by investigating the use of a learning-by-observation, case-based reasoning agent, which can be applied to new RTS games with minimal development effort. To be successful, this agent must compare favourably with standard commercial RTS AI techniques: it must be easier to apply, have reasonable resource requirements, and produce a better player. Currently, a prototype implementation has been produced for the game StarCraft, and it has demonstrated the need for processing large sets of input data into a more concise form for use at run-time.
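The core of a case-based reasoning agent like the one described above is retrieval: condensing a game situation into a concise feature vector and finding the closest stored case to suggest an action. The following is a minimal sketch under assumed features and a Euclidean distance metric; the feature set and actions are hypothetical, not the prototype's actual representation.

```python
# Hedged sketch of case-based retrieval: a situation becomes a feature
# vector, and the nearest stored case supplies the action. The features
# (army size, worker count, minerals/100) and the distance metric are
# illustrative assumptions.
import math

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve(case_base, situation):
    """Return the action stored with the nearest case."""
    features, action = min(case_base, key=lambda case: distance(case[0], situation))
    return action

# Cases condensed from observed games: (feature vector, action taken).
case_base = [
    ([5, 20, 4], "expand"),
    ([30, 25, 10], "attack"),
    ([10, 8, 1], "defend"),
]

print(retrieve(case_base, [28, 24, 9]))  # attack
```

A case base distilled from many replays can grow very large, which is one reason the prototype highlighted the need to process raw input data into a more concise run-time form.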
Goal-driven autonomy (GDA) is a conceptual model for creating an autonomous agent that monitors a set of expectations during plan execution, detects when discrepancies occur, builds explanations for the cause of failures, and formulates new goals to pursue when planning failures arise. While this framework enables the development of agents that can operate in complex and dynamic environments, implementing the logic for each of the subtasks in the model requires substantial domain engineering. We present a method using case-based reasoning and intent recognition in order to build GDA agents that learn from demonstrations. Our approach reduces the amount of domain engineering necessary to implement GDA agents and learns expectations, explanations, and goals from expert demonstrations. We have applied this approach to build an agent for the real-time strategy game StarCraft. Our results show that integrating the GDA conceptual model into the agent greatly improves its win rate.
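The GDA subtasks described above (monitor expectations, detect discrepancies, explain them, and formulate new goals) can be sketched as a small pipeline over sets of state facts. The toy domain, names, and learned mappings below are illustrative assumptions, not the paper's actual StarCraft implementation; in the paper, the expectations, explanations, and goals are learned from expert demonstrations rather than hand-coded.

```python
# Toy sketch of the four GDA subtasks on set-valued state facts.
# All knowledge tables here are hand-written stand-ins for what the
# paper learns from demonstrations.

def detect_discrepancies(expected, observed):
    """Expected facts that failed to hold in the observed state."""
    return expected - observed

def explain(discrepancies, explanations):
    """Map each discrepancy to a learned causal explanation."""
    return {d: explanations.get(d, "unknown") for d in discrepancies}

def formulate_goals(causes, goal_rules):
    """Select new goals that address the explained causes."""
    return {goal_rules[c] for c in causes.values() if c in goal_rules}

# Illustrative learned knowledge (hypothetical):
explanations = {"expansion_alive": "enemy_attack"}
goal_rules = {"enemy_attack": "defend_expansion"}

expected = {"expansion_alive", "army_intact"}   # expectations during plan execution
observed = {"army_intact"}                      # the expansion was destroyed

discrepancies = detect_discrepancies(expected, observed)
causes = explain(discrepancies, explanations)
new_goals = formulate_goals(causes, goal_rules)
print(new_goals)  # {'defend_expansion'}
```

Each step feeds the next: a failed expectation yields a discrepancy, the discrepancy is explained, and the explanation drives goal formulation, after which the agent replans toward the new goal.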