Tuning Belief Revision for Coordination with Inconsistent Teammates
Sarratt, Trevor (University of California Santa Cruz) | Jhala, Arnav (University of California Santa Cruz)
Coordination with an unknown human teammate is a notable challenge for cooperative agents. The behavior of human players in games with cooperating AI agents is often sub-optimal and inconsistent, which leads to choreographed and limited cooperative scenarios. This paper considers the difficulty of cooperating with a teammate whose goal, and corresponding behavior, change periodically. Previous work uses Bayesian models to update beliefs about cooperating agents based on observations. We describe belief models for on-line planning, discuss tuning them in the presence of noisy observations, and demonstrate empirically their effectiveness for coordination with inconsistent agents in a simple domain. Further work in this area promises to yield techniques for more interesting cooperative AI in games.
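As a rough illustration of the kind of belief revision the abstract refers to (not the authors' actual model), the sketch below applies a standard Bayesian update over a set of hypothesized teammate goals; the goal labels, likelihood values, and discounting parameter are assumptions for illustration only.

```python
# A minimal sketch of Bayesian belief revision over a teammate's possible goals.
# Hypothetical goal labels and likelihoods; not the paper's actual model.

def update_beliefs(beliefs, likelihoods, discount=0.9):
    """Revise a belief distribution after one observation.

    beliefs:     dict mapping goal -> prior probability P(goal)
    likelihoods: dict mapping goal -> P(observation | goal)
    discount:    tuning knob in (0, 1]; lower values soften each update,
                 which helps when the teammate changes goals over time.
    """
    # Bayes' rule, with the likelihood raised to a power to temper
    # overconfident updates from noisy observations.
    posterior = {g: beliefs[g] * (likelihoods[g] ** discount) for g in beliefs}
    total = sum(posterior.values())
    return {g: p / total for g, p in posterior.items()}


# Example: two hypothetical goals the teammate might be pursuing.
beliefs = {"defend_base": 0.5, "collect_resource": 0.5}
observation_likelihoods = {"defend_base": 0.2, "collect_resource": 0.7}
beliefs = update_beliefs(beliefs, observation_likelihoods)
print(beliefs)  # belief shifts toward "collect_resource"
```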
Nov-1-2015