Collaborating Authors

Weber, Ben George


Learning from Demonstration for Goal-Driven Autonomy

AAAI Conferences

Goal-driven autonomy (GDA) is a conceptual model for creating an autonomous agent that monitors a set of expectations during plan execution, detects when discrepancies occur, builds explanations for the causes of failures, and formulates new goals to pursue when planning failures arise. While this framework enables the development of agents that can operate in complex and dynamic environments, implementing the logic for each subtask in the model requires substantial domain engineering. We present a method that uses case-based reasoning and intent recognition to build GDA agents that learn from demonstration. Our approach reduces the domain engineering needed to implement GDA agents and learns expectations, explanations, and goals from expert demonstrations. We have applied this approach to build an agent for the real-time strategy game StarCraft. Our results show that integrating the GDA conceptual model into the agent substantially improves its win rate.
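To make the learned GDA loop concrete, the sketch below shows one way the pieces could fit together: expectations, explanations, and goals are retrieved from a library of expert cases rather than hand-coded rules. This is an illustrative Python sketch, not the paper's implementation; the Case fields, the similarity function, and all feature names and values are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Case:
    """One (state, expectation, goal) triple mined from an expert demonstration."""
    state: dict        # observed game features, e.g. {"supply": 20}
    expectation: dict  # features the expert's plan predicts at the next decision point
    goal: str          # the goal the expert pursued from this state

def similarity(a: dict, b: dict) -> float:
    """Toy similarity over shared numeric features (illustrative only)."""
    keys = set(a) & set(b)
    if not keys:
        return 0.0
    return sum(1.0 / (1.0 + abs(a[k] - b[k])) for k in keys) / len(keys)

class GDAFromDemonstration:
    """Minimal GDA loop whose expectations, explanations, and goals all come
    from a case library rather than hand-engineered domain rules."""

    def __init__(self, case_library: list):
        self.cases = case_library

    def detect_discrepancy(self, expected: dict, observed: dict, tol: float = 0.2) -> list:
        """Flag features whose observed value strays too far from the expectation."""
        return [k for k in expected
                if abs(observed.get(k, 0) - expected[k]) > tol * max(1, abs(expected[k]))]

    def explain_and_reformulate(self, observed: dict) -> Case:
        """'Explain' a failure by retrieving the most similar expert case,
        then adopt the goal the expert pursued in that situation."""
        return max(self.cases, key=lambda c: similarity(c.state, observed))

# Usage: a discrepancy triggers retrieval of a new goal from the library.
library = [Case({"supply": 20, "enemy_units_seen": 8}, {"supply": 28}, "defend_expansion"),
           Case({"supply": 40, "enemy_units_seen": 0}, {"supply": 52}, "expand")]
agent = GDAFromDemonstration(library)
expected, observed = {"supply": 28}, {"supply": 19, "enemy_units_seen": 9}
if agent.detect_discrepancy(expected, observed):
    print("new goal:", agent.explain_and_reformulate(observed).goal)
```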


Building Human-Level AI for Real-Time Strategy Games

AAAI Conferences

Video games are complex simulation environments with many real-world properties that must be addressed to build robust intelligence. In particular, real-time strategy games pose a multi-scale challenge that requires both deliberative and reactive reasoning processes. Experts approach this task by studying a corpus of games, building models for anticipating opponent actions, and practicing within the game environment. We motivate the need for integrating heterogeneous approaches by enumerating a range of competencies involved in gameplay, and we discuss how they are implemented in EISBot, a reactive planning agent that we have applied to the task of playing real-time strategy games at the same granularity as humans.
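As an illustration of the multi-scale point, the sketch below interleaves a slow deliberative manager with a fast reactive one in a single frame loop. It is a hedged toy, not EISBot's architecture; the manager classes, periods, and game-state fields are assumptions made up for the example.

```python
class Manager:
    """Base class for a reasoning process that runs at its own timescale."""
    def __init__(self, period_frames: int):
        self.period = period_frames

    def act(self, game_state: dict) -> list:
        raise NotImplementedError

class StrategyManager(Manager):
    """Deliberative: reconsiders high-level decisions only every few seconds."""
    def act(self, game_state):
        return ["expand"] if game_state["minerals"] > 400 else ["train_worker"]

class MicroManager(Manager):
    """Reactive: issues unit-level orders every frame."""
    def act(self, game_state):
        return [f"kite_unit_{u}" for u in game_state["engaged_units"]]

def run_frame(frame: int, managers: list, game_state: dict) -> list:
    """Interleave heterogeneous managers: each fires when its period divides the frame."""
    orders = []
    for m in managers:
        if frame % m.period == 0:
            orders += m.act(game_state)
    return orders

managers = [StrategyManager(period_frames=120), MicroManager(period_frames=1)]
state = {"minerals": 450, "engaged_units": [7, 9]}
for frame in (0, 1, 120):
    print(frame, run_frame(frame, managers, state))
```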


A Particle Model for State Estimation in Real-Time Strategy Games

AAAI Conferences

A major challenge in creating human-level game AI is building agents capable of operating in imperfect-information environments. In real-time strategy games, an opponent's technological progress and the locations of enemy units are only partially observable. To overcome this limitation, we explore a particle-based approach to estimating the locations of enemy units that have been encountered. We represent state estimation as an optimization problem and automatically learn the particle model's parameters by mining a corpus of expert StarCraft replays. The particle model tracks opponent units and provides conditions for activating tactical behaviors in our StarCraft bot. Our results show that incorporating the learned particle model improves EISBot's performance by 10% over baseline approaches.
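One way to picture the abstract's particle model: when an enemy unit leaves vision, a cloud of particles diffuses from its last known position, and the cloud's weighted mean serves as the location estimate. The Python sketch below is a minimal illustration under that reading, not the paper's implementation; the spread and decay parameters stand in for the values the paper learns from replays and are invented here.

```python
import random

class Particle:
    def __init__(self, x: float, y: float, weight: float = 1.0):
        self.x, self.y, self.weight = x, y, weight

class ParticleModel:
    """Tracks a previously seen enemy unit with a particle cloud that diffuses
    and loses confidence while the unit stays out of vision."""

    def __init__(self, last_x: float, last_y: float, n: int = 100,
                 spread: float = 2.0, decay: float = 0.98):
        # spread and decay are the kind of parameters the paper learns by
        # optimizing against expert replays; the values here are made up.
        self.spread, self.decay = spread, decay
        self.particles = [Particle(last_x, last_y) for _ in range(n)]

    def step(self):
        """One game step with no new observation: diffuse positions, decay weights."""
        for p in self.particles:
            p.x += random.gauss(0, self.spread)
            p.y += random.gauss(0, self.spread)
            p.weight *= self.decay

    def estimate(self):
        """Weighted mean position plus a confidence score in [0, 1]."""
        total = sum(p.weight for p in self.particles)
        x = sum(p.x * p.weight for p in self.particles) / total
        y = sum(p.y * p.weight for p in self.particles) / total
        return (x, y), total / len(self.particles)

model = ParticleModel(last_x=50.0, last_y=80.0)
for _ in range(30):  # 30 steps since the unit left our vision
    model.step()
pos, conf = model.estimate()
print(f"estimated position {pos}, confidence {conf:.2f}")
```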


Applying Goal-Driven Autonomy to StarCraft

AAAI Conferences

One of the main challenges in game AI is building agents that can react intelligently to unforeseen game situations. In real-time strategy games, players create new strategies and tactics that were not anticipated during development. To build agents capable of adapting to such events, we advocate developing agents that reason about their goals in response to unanticipated game events. This decouples the goal selection logic from the goal execution logic in an agent. We present a reactive planning implementation of the Goal-Driven Autonomy conceptual model and demonstrate its application in StarCraft. Our system achieves a 73% win rate against the built-in AI and outranks 48% of human players on a competitive ladder server.
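The decoupling described here, goal selection separated from goal execution, can be sketched as two components that communicate only through the active goal. The following toy Python example illustrates that separation; the event names and the goal-to-action mapping are hypothetical and not drawn from the paper.

```python
class GoalExecutor:
    """Executes whatever goal is active; knows nothing about why it was chosen."""
    ACTIONS = {"rush": "attack_move(enemy_base)",
               "defend": "hold_position(choke)",
               "expand": "build(expansion)"}

    def step(self, goal: str) -> str:
        return self.ACTIONS.get(goal, "idle")

class GoalSelector:
    """Reformulates the active goal in response to unanticipated events,
    independently of how any goal is carried out."""

    def select(self, current_goal: str, events: list) -> str:
        if "base_under_attack" in events:
            return "defend"
        if "enemy_army_destroyed" in events:
            return "expand"
        return current_goal

selector, executor = GoalSelector(), GoalExecutor()
goal = "rush"
for events in ([], ["base_under_attack"], []):
    goal = selector.select(goal, events)
    print(events, "->", goal, "->", executor.step(goal))
```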