Tuning Belief Revision for Coordination with Inconsistent Teammates

Sarratt, Trevor (University of California Santa Cruz) | Jhala, Arnav (University of California Santa Cruz)

AAAI Conferences 

Coordination with an unknown human teammate is a notable challenge for cooperative agents. The behavior of human players in games with cooperating AI agents is often sub-optimal and inconsistent, leading to choreographed and limited cooperative scenarios in games. This paper considers the difficulty of cooperating with a teammate whose goal, and corresponding behavior, change periodically. Previous work uses Bayesian models for updating beliefs about cooperating agents based on observations. We describe belief models for on-line planning, discuss tuning in the presence of noisy observations, and empirically demonstrate their effectiveness in coordinating with inconsistent agents in a simple domain. Further work in this area promises to lead to techniques for more interesting cooperative AI in games.
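The Bayesian belief revision the abstract refers to can be illustrated with a minimal sketch (not the authors' implementation; the function name, the `noise` parameter, and the example numbers are assumptions for illustration): beliefs over a teammate's candidate goals are updated from action likelihoods, and a noise term tunes how aggressively beliefs shift under inconsistent observations.

```python
def update_beliefs(beliefs, likelihoods, noise=0.1):
    """Bayes update: P(goal | action) is proportional to P(action | goal) * P(goal).

    `noise` mixes each likelihood with a uniform term so that an action
    inconsistent with every goal model does not drive a belief to zero;
    a larger `noise` slows belief revision, a smaller one speeds it up.
    This is a hypothetical tuning knob, not the paper's exact scheme.
    """
    n = len(beliefs)
    posterior = [
        prior * ((1 - noise) * lik + noise / n)
        for prior, lik in zip(beliefs, likelihoods)
    ]
    total = sum(posterior)
    return [p / total for p in posterior]


# Example: three candidate goals, uniform prior, and an observed action
# whose likelihood strongly favors goal 0.
beliefs = [1 / 3, 1 / 3, 1 / 3]
beliefs = update_beliefs(beliefs, likelihoods=[0.8, 0.1, 0.1])
```

Repeating the update as new actions arrive lets the agent track a teammate whose goal changes periodically, since the noise floor keeps abandoned goals recoverable.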
