An Application of Multiagent Learning in Highly Dynamic Environments

Wray, Kyle Hollins (University of Massachusetts, Amherst) | Thompson, Benjamin B. (The Pennsylvania State University)

AAAI Conferences 

We explore the emergent behavior of game-theoretic algorithms in a highly dynamic applied setting in which the optimal goal for the agents is constantly changing. Our focus is on a variant of the traditional predator-prey problem called Defender. Consisting of multiple predators and multiple prey, Defender shares similarities with rugby, soccer, and football, in addition to current problems in the field of Multiagent Systems (MAS). Observations, communications, and knowledge about the world-state are designed to be information-sparse, modeling real-world uncertainty. We propose a solution to Defender by means of the well-known multiagent learning algorithm fictitious play, and compare it with rational learning, regret matching, minimax regret, and a simple greedy strategy. We provide the modifications required to build these agents and state the implications of applying them to our problem. We show fictitious play's performance to be superior at evenly assigning predators to prey, despite Defender being an incomplete- and imperfect-information game that continually changes its dimension and payoff. Interestingly, its performance is attributed to a synthesis of fictitious play, partial observability, and an anti-coordination game which reinforces the payoff of actions that were previously taken.
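The abstract does not give implementation details, but the core learning rule it names is standard: in fictitious play, each agent tracks the empirical frequency of the other agents' past actions and best-responds to that belief. A minimal two-player sketch on a toy anti-coordination game (loosely analogous to predators preferring to split across different prey) is shown below; the payoff matrix and all function names are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def fictitious_play(A, B, steps=2000):
    """Two-player fictitious play on a bimatrix game (illustrative sketch).

    A, B: payoff matrices for the row and column player.
    Each player best-responds to the empirical frequency of the
    opponent's past actions; counts start at 1 (uniform prior).
    Returns the empirical mixed strategies of both players.
    """
    n, m = A.shape
    col_counts = np.ones(m)  # row player's counts of the column player's actions
    row_counts = np.ones(n)  # column player's counts of the row player's actions
    for _ in range(steps):
        # Beliefs: empirical action frequencies of the opponent so far.
        belief_col = col_counts / col_counts.sum()
        belief_row = row_counts / row_counts.sum()
        # Best responses to those beliefs (ties broken by lowest index).
        a = int(np.argmax(A @ belief_col))
        b = int(np.argmax(belief_row @ B))
        row_counts[a] += 1
        col_counts[b] += 1
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

# Anti-coordination: each player is paid 1 only when the actions differ,
# e.g. two predators each covering a different prey.
A = np.array([[0.0, 1.0], [1.0, 0.0]])
x, y = fictitious_play(A, A)
```

In this symmetric anti-coordination game, deterministic tie-breaking makes the players miscoordinate in lockstep, so their empirical frequencies converge to the mixed equilibrium (1/2, 1/2) rather than to either pure equilibrium; this is the well-known cycling behavior of fictitious play, and it illustrates why partial observability and payoff reinforcement matter in the full Defender setting.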
