Human-Agent Teaming as a Common Problem for Goal Reasoning

AAAI Conferences

Human-agent teaming is a difficult yet relevant problem domain to which many goal reasoning systems are well suited, due to their ability to accept outside direction and their (relatively) human-understandable internal state. We propose a formal model, and multiple variations on a multi-agent problem, to clarify and unify research in goal reasoning. We describe examples of these concepts, and propose standard evaluation methods for goal reasoning agents that act as members of a team or on behalf of a supervisor.
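
As a rough illustration of the two properties this abstract highlights, the sketch below shows a goal reasoning agent whose goal agenda can be modified by outside direction and inspected by a human teammate. All class and method names here are illustrative assumptions, not the paper's formal model.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Goal:
    name: str
    priority: float  # higher values are pursued first

@dataclass
class GoalReasoningAgent:
    """Minimal goal reasoning loop: the agent chooses its own goals,
    but a human supervisor can add or retract goals at any time."""
    goals: List[Goal] = field(default_factory=list)

    def accept_direction(self, goal: Goal) -> None:
        # Outside direction: a teammate or supervisor injects a goal.
        self.goals.append(goal)

    def retract(self, name: str) -> None:
        # Outside direction can also remove goals from the agenda.
        self.goals = [g for g in self.goals if g.name != name]

    def explain(self) -> str:
        # Human-understandable internal state: report the goal agenda.
        ordered = sorted(self.goals, key=lambda g: -g.priority)
        return "; ".join(f"{g.name} (priority {g.priority})" for g in ordered)

    def select_goal(self) -> Optional[Goal]:
        # Pursue the highest-priority pending goal, if any.
        return max(self.goals, key=lambda g: g.priority, default=None)
```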


A Review of Real-Time Strategy Game AI

AI Magazine

This literature review covers AI techniques used for real-time strategy video games, focusing specifically on StarCraft. It finds that the main areas of current academic research are in tactical and strategic decision-making, plan recognition, and learning, and it outlines the research contributions in each of these areas. The paper then contrasts the use of game AI in academia and industry, finding the academic research heavily focused on creating game-winning agents, while the industry aims to maximise player enjoyment. It finds that industry adoption of academic research is low because it is either inapplicable or too time-consuming and risky to implement in a new game, which highlights an area for potential investigation: bridging the gap between academia and industry. Finally, the areas of spatial reasoning, multi-scale AI, and cooperation are found to require future work, and standardised evaluation methods are proposed to produce comparable results between studies.


A Comparison of Case Acquisition Strategies for Learning from Observations of State-Based Experts

AAAI Conferences

This paper focuses on case acquisition strategies in the context of Case-based Learning from Observation (CBLfO). In Learning from Observation (LfO), a system learns behaviors by observing an expert rather than being explicitly programmed. Specifically, we focus on the problem of learning behaviors from experts that reason using internal state information, that is, information that cannot be directly observed. Because this state information is unobservable, the behaviors cannot be represented by a simple perception-to-action mapping. To address this problem, we propose a new case acquisition strategy called "Similarity-based Chunking" and compare it with existing strategies. Additionally, since standard classification accuracy in predicting the expert's actions is known to be a poor measure for evaluating LfO systems, we propose a new evaluation procedure based on two complementary metrics: behavior performance and similarity with the expert.
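
The abstract names the strategy but not its mechanics, so the following is only one plausible reading of similarity-based chunking: a trace of (observation, action) pairs is segmented into cases wherever the current observation drifts too far from the start of the open chunk. The similarity function and threshold below are invented for illustration.

```python
def similarity(a, b):
    """Toy state similarity: fraction of matching features.
    Assumes states are equal-length feature tuples."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def chunk_trace(trace, threshold=0.8):
    """Segment an (observation, action) trace into cases by similarity.
    A new case starts whenever the current observation falls below the
    threshold relative to the first observation of the open chunk. Each
    case pairs a representative state with an action sequence."""
    cases, chunk = [], []
    for obs, action in trace:
        if chunk and similarity(obs, chunk[0][0]) < threshold:
            cases.append((chunk[0][0], [a for _, a in chunk]))
            chunk = []
        chunk.append((obs, action))
    if chunk:
        cases.append((chunk[0][0], [a for _, a in chunk]))
    return cases

# Example: two stretches of similar observations become two cases.
trace = [((0, 0), "wait"), ((0, 1), "wait"), ((5, 5), "run"), ((5, 6), "run")]
print(chunk_trace(trace, threshold=0.5))
# -> [((0, 0), ['wait', 'wait']), ((5, 5), ['run', 'run'])]
```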


Integrating Reinforcement Learning into a Programming Language

AAAI Conferences

Creating artificially intelligent agents that are high-fidelity simulations of natural agents will require the engagement of behavioral scientists. However, agent programming systems that are accessible to behavioral scientists are too limited to create rich agents, and systems for creating rich agents are accessible mainly to computer scientists, not behavioral scientists. We are solving this problem by engaging behavioral scientists in the design of a programming language and by integrating reinforcement learning into that language. This strategy will help our language achieve adaptivity, modularity, and, most importantly, accessibility to behavioral scientists. In addition to allowing behavioral scientists to write rich agent programs, our language, AFABL (A Friendly Behavior Language), will enable a true discipline of modular agent software engineering with broad implications for games, interactive storytelling, and social simulations.
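
To make the integration concrete, here is a minimal sketch of the underlying idea: a behavior module that declares its actions and receives a reward signal, and learns its policy by tabular Q-learning instead of hand-coded logic. The class and parameter names are assumptions for illustration and do not reflect AFABL's actual syntax or API.

```python
import random
from collections import defaultdict

class LearnedBehavior:
    """Sketch of an RL-backed behavior module: the programmer supplies
    actions and rewards; the policy itself is learned, not written."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.actions = actions
        self.q = defaultdict(float)   # (state, action) -> learned value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        # Epsilon-greedy action selection over the learned values.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def learn(self, state, action, reward, next_state):
        # Standard one-step Q-learning update.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])
```

The design point this sketch tries to capture is the one the abstract argues for: a behavioral scientist specifies what the agent should value (the reward), and the language's built-in learning supplies how the agent acts.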