Silva, Michael (PARC, A Xerox Company) | McCroskey, Silas (PARC, A Xerox Company) | Rubin, Jonathan (PARC, A Xerox Company) | Youngblood, Michael (PARC, A Xerox Company) | Ram, Ashwin (PARC, A Xerox Company)
We present an approach that uses learning from demonstration in a computer role-playing game to create a controller for a companion team member. We describe a behavior engine that uses case-based reasoning. The behavior engine accepts observation traces of human playing decisions and produces a sequence of actions, which can then be carried out by an artificial agent within the gaming environment. Our work focuses on team-based role-playing games, where the agents produced by the behavior engine act as team members within a mixed human-agent team. We present the results of a study assessing both the quantitative and qualitative performance differences between human-only teams and hybrid human-agent teams. The results show that hybrid human-agent teams were more successful at task completion and, along some qualitative dimensions, were perceived more favorably than human-only teams.
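The behavior engine described above maps observed game states to demonstrated actions via case-based reasoning. A minimal sketch of that retrieve-and-reuse step, with illustrative names and a simple distance-based similarity (the paper's actual engine is richer than this):

```python
# Minimal sketch of case-based action selection from a demonstration trace.
# Feature vectors, action names, and the similarity measure are illustrative
# assumptions, not the paper's actual representation.
from dataclasses import dataclass

@dataclass
class Case:
    observation: tuple  # feature vector describing the observed game state
    action: str         # action the human demonstrator took in that state

def similarity(a, b):
    # Inverse squared Euclidean distance as a simple similarity measure.
    return 1.0 / (1.0 + sum((x - y) ** 2 for x, y in zip(a, b)))

def select_action(case_base, observation):
    # Retrieve the most similar stored case and reuse its action.
    best = max(case_base, key=lambda c: similarity(c.observation, observation))
    return best.action

# Cases acquired from observing a human team member's decisions.
trace = [Case((0.1, 0.9), "heal_ally"), Case((0.8, 0.2), "attack_enemy")]
print(select_action(trace, (0.7, 0.3)))  # nearest case -> "attack_enemy"
```

At runtime the agent repeats this retrieval each decision cycle, so the output over time is a sequence of actions executable in the game environment.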
This literature review covers AI techniques used for real-time strategy video games, focusing specifically on StarCraft. It finds that the main areas of current academic research are in tactical and strategic decision-making, plan recognition, and learning, and it outlines the research contributions in each of these areas. The paper then contrasts the use of game AI in academia and industry, finding the academic research heavily focused on creating game-winning agents, while the industry aims to maximise player enjoyment. It finds that industry adoption of academic research is low because it is either inapplicable or too time-consuming and risky to implement in a new game, which highlights an area for potential investigation: bridging the gap between academia and industry. Finally, the areas of spatial reasoning, multi-scale AI, and cooperation are found to require future work, and standardised evaluation methods are proposed to produce comparable results between studies.
Molineaux, Matthew (Knexus Research Corporation) | Floyd, Michael W. (Knexus Research Corporation) | Dannenhauer, Dustin (United States Naval Research Laboratory) | Aha, David W. (United States Naval Research Laboratory)
Human-agent teaming is a difficult yet relevant problem domain to which many goal reasoning systems are well suited, due to their ability to accept outside direction and their (relatively) human-understandable internal state. We propose a formal model and multiple variations on a multi-agent problem to clarify and unify research in goal reasoning. We describe examples of these concepts and propose standard evaluation methods for goal reasoning agents that act as members of a team or on behalf of a supervisor.
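The two abilities named above, accepting outside direction and exposing an inspectable internal state, can be sketched as a simple goal reasoning loop. All class and goal names here are hypothetical illustrations, not the paper's formal model:

```python
# Hedged sketch of a goal reasoning agent in a human-agent team: it pursues
# its own formulated goals but accepts preempting direction from a supervisor.
# Goal names and the discrepancy rule are illustrative assumptions.
from collections import deque

class GoalReasoningAgent:
    def __init__(self):
        self.goals = deque()  # pending goals; leftmost is pursued next

    def receive_direction(self, goal):
        # Outside direction from the human supervisor preempts current goals.
        self.goals.appendleft(goal)

    def detect_discrepancy(self, state):
        # Goal formulation: generate a new goal when the state warrants it.
        if state.get("ally_health", 1.0) < 0.3:
            self.goals.append("protect_ally")

    def step(self, state):
        self.detect_discrepancy(state)
        return self.goals.popleft() if self.goals else "idle"

agent = GoalReasoningAgent()
agent.receive_direction("capture_flag")
print(agent.step({"ally_health": 0.2}))  # supervisor's goal is served first
```

Because the goal queue is an explicit, readable structure, a teammate or supervisor can inspect why the agent is acting as it is, which is the human-understandability property the abstract highlights.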
This paper focuses on case acquisition strategies in the context of Case-based Learning from Observation (CBLfO). In Learning from Observation (LfO), a system learns behaviors by observing an expert rather than being explicitly programmed. Specifically, we focus on the problem of learning behaviors from experts that reason using internal state information, that is, information that cannot be directly observed. The unobservability of this state information means that the behaviors cannot be represented by a simple perception-to-action mapping. We propose a new case acquisition strategy called "Similarity-based Chunking", and compare it with existing strategies to address this problem. Additionally, since standard classification accuracy in predicting the expert's actions is known to be a poor measure for evaluating LfO systems, we propose a new evaluation procedure based on two complementary metrics: behavior performance and similarity with the expert.
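The chunking idea above, grouping consecutive trace entries into a single case so that one case captures an action sequence rather than a single perception-to-action pair, can be sketched as follows. The similarity function and threshold are assumptions for illustration, not the paper's actual formulation:

```python
# Illustrative sketch of similarity-based chunking of an observation trace:
# consecutive entries whose observations are sufficiently similar are merged
# into one case, so a case holds an action sequence rather than one mapping.
# The similarity measure and threshold are assumed values, not the paper's.

def similarity(a, b):
    return 1.0 / (1.0 + sum((x - y) ** 2 for x, y in zip(a, b)))

def chunk_trace(trace, threshold=0.5):
    # trace: list of (observation, action) pairs in temporal order.
    chunks = []  # each chunk is ([observations], [actions])
    for obs, act in trace:
        if chunks and similarity(chunks[-1][0][-1], obs) >= threshold:
            chunks[-1][0].append(obs)   # extend the current chunk
            chunks[-1][1].append(act)
        else:
            chunks.append(([obs], [act]))  # dissimilar state: new chunk
    return chunks

trace = [((0.0, 0.0), "move"), ((0.1, 0.0), "move"), ((5.0, 5.0), "attack")]
print(len(chunk_trace(trace)))  # 2 chunks: the two similar "move" steps merge
```

Replaying a whole chunk's action sequence on retrieval gives the agent a crude surrogate for the expert's unobservable internal state, since the sequence itself encodes context that a single observation does not.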