Joint Policy Search for Multi-agent Collaboration with Imperfect Information

Neural Information Processing Systems 

Learning good joint policies for multi-agent collaboration with imperfect information remains a fundamental challenge. However, directly modeling joint policy changes in imperfect-information games is nontrivial due to the complicated interplay of policies (e.g., upstream policy updates affect the reachability of downstream states). In this paper, we show that global changes of game values can be decomposed into policy changes localized at each information set, via a novel term named \emph{policy-change density}. Based on this, we propose \emph{Joint Policy Search} (JPS), which iteratively improves the joint policy of collaborative agents in imperfect-information games without re-evaluating the entire game. On multiple collaborative tabular games, JPS is proven to never worsen performance and can improve solutions provided by unilateral approaches (e.g., CFR), outperforming algorithms designed for collaborative policy learning. Furthermore, for real-world games whose states are too numerous to enumerate, JPS has an online form that naturally links with gradient updates.
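The abstract's core idea, iteratively accepting localized policy changes only when they do not worsen the joint value, can be illustrated on a toy example. The sketch below is not the paper's JPS algorithm (it omits information sets and the policy-change density entirely); it is a minimal, assumed coordinate-ascent analogue on a two-agent collaborative matrix game, showing monotone joint-policy improvement from local changes.

```python
# Hedged sketch: coordinate-ascent joint-policy improvement on a tiny
# collaborative matrix game. This illustrates the "accept a local change
# only if the joint value does not decrease" idea; it is NOT the paper's
# JPS algorithm, which operates over information sets in sequential games.

# Shared payoff for both agents: payoff[a1][a2].
payoff = [
    [1.0, 0.0],
    [0.0, 2.0],
]

def joint_value(p1, p2):
    """Expected shared payoff under independent mixed policies p1, p2."""
    return sum(p1[i] * p2[j] * payoff[i][j]
               for i in range(2) for j in range(2))

def local_improve(p1, p2, iters=20):
    """Alternately move each agent to a pure action whenever that strictly
    improves the joint value, so the value never decreases."""
    for _ in range(iters):
        for agent in (0, 1):
            best_v, best_p = joint_value(p1, p2), None
            for a in range(2):
                cand = [1.0 if i == a else 0.0 for i in range(2)]
                v = joint_value(cand, p2) if agent == 0 else joint_value(p1, cand)
                if v > best_v:
                    best_v, best_p = v, cand
            if best_p is not None:
                if agent == 0:
                    p1 = best_p
                else:
                    p2 = best_p
    return p1, p2, joint_value(p1, p2)

# Starting from uniform policies (value 0.75), local improvements reach
# the better coordinated outcome with value 2.0.
p1, p2, v = local_improve([0.5, 0.5], [0.5, 0.5])
```

Note how unilateral best responses alone could stall at the inferior coordinated outcome (value 1.0) if both agents began there; accepting only non-worsening changes from a mixed start lets the pair coordinate on the better joint optimum.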