Exploiting Anonymity in Approximate Linear Programming: Scaling to Large Multiagent MDPs
Robbel, Philipp (Massachusetts Institute of Technology) | Oliehoek, Frans A. (University of Amsterdam) | Kochenderfer, Mykel J. (Stanford University)
The Markov Decision Process (MDP) framework is a versatile method for addressing single-agent and multiagent sequential decision-making problems. Many exact and approximate solution methods attempt to exploit structure in the problem and are based on value factorization. Multiagent settings (MASs), however, are particularly prone to an exponential increase in the size of value components as interactions become denser, which severely restricts the problem sizes and types that such approximation architectures can handle. We present an approach that mitigates this limitation for certain types of MASs by exploiting a property that can be thought of as "anonymous influence" in the factored MDP. In particular, we show how anonymity can lead to representational and computational efficiencies, both for general variable elimination in a factor graph and for the approximate linear programming solution to factored MDPs. The latter makes it possible to scale linear programming to factored MDPs that were previously unsolvable. We demonstrate our results on a disease-control domain over a graph with 50 nodes, each connected to up to 15 neighbors.
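To illustrate the kind of saving that "anonymous influence" refers to, the sketch below is a hypothetical illustration (not the authors' code; all names and values are assumptions): a factor over k binary neighbor variables that depends only on the count of active neighbors can be stored with k+1 entries instead of 2^k, and a variable-elimination step can be carried out directly on the counts.

```python
# Hypothetical sketch of count-based ("anonymous") factors, not the paper's implementation.

def full_table_size(k: int) -> int:
    """Entries in an explicit factor table over k binary neighbor variables."""
    return 2 ** k

def count_based_size(k: int) -> int:
    """Entries when the factor depends only on the neighbor count."""
    return k + 1

def sum_out_one_neighbor(count_factor):
    """One variable-elimination step on a count-based factor.

    count_factor[c] is the factor value when c of the k neighbors are active.
    Summing out one (exchangeable) neighbor leaves a factor over the
    remaining k-1 neighbors, still indexed only by their count:
        new[c] = f(c)          # eliminated neighbor inactive
               + f(c + 1)      # eliminated neighbor active
    """
    k = len(count_factor) - 1
    return [count_factor[c] + count_factor[c + 1] for c in range(k)]

if __name__ == "__main__":
    k = 15  # e.g. up to 15 neighbors, as in the disease-control domain
    print(full_table_size(k), "explicit entries vs", count_based_size(k), "count-based entries")
    f = [float(c) for c in range(k + 1)]  # arbitrary example values f(c) = c
    g = sum_out_one_neighbor(f)
    print(len(g), "entries remain after summing out one neighbor")
```

With k = 15 this is 16 count-based entries in place of 32,768 table entries, which conveys why count-based factors keep variable elimination and the ALP constraint generation tractable on densely connected graphs.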
Nov-1-2015