Multi-Agent Reinforcement Learning Paper Reading: QPLEX

#artificialintelligence 

In the previous article (you can follow the link below for a recap), I shared the paper Weighted QMIX: Expanding Monotonic Value Function Factorization for Deep Multi-Agent Reinforcement Learning, which argues that earlier approaches such as VDN and QMIX can represent only a limited class of tasks, and proposes a new framework to overcome this limitation. In this article, I will share another approach that can factorize any factorizable task: QPLEX! Most multi-agent methods follow the popular paradigm of centralized training with decentralized execution (CTDE). In this paradigm, the Individual-Global-Max (IGM) principle plays an important role: the greedy joint action obtained from the global value function must coincide with the tuple of greedy actions obtained independently from each agent's individual value function. However, many methods relax IGM consistency in exchange for scalability.
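To make the IGM principle concrete, here is a minimal sketch using a VDN-style additive factorization (my own toy illustration, not code from the QPLEX paper): when the joint value is a sum of per-agent utilities, maximizing the joint value is the same as each agent maximizing its own utility.

```python
import numpy as np

# Toy setting: 2 agents, 3 actions each (hypothetical random utilities).
rng = np.random.default_rng(0)
q1 = rng.normal(size=3)  # per-agent utility Q_1(tau_1, a_1)
q2 = rng.normal(size=3)  # per-agent utility Q_2(tau_2, a_2)

# VDN-style additive factorization: Q_tot(a1, a2) = Q_1(a1) + Q_2(a2)
q_tot = q1[:, None] + q2[None, :]

# IGM consistency: greedy joint action == tuple of per-agent greedy actions
joint_greedy = tuple(int(i) for i in np.unravel_index(np.argmax(q_tot), q_tot.shape))
local_greedy = (int(np.argmax(q1)), int(np.argmax(q2)))
assert joint_greedy == local_greedy  # additive mixing always satisfies IGM
```

Additive (VDN) and monotonic (QMIX) mixing satisfy IGM by construction, but at the cost of restricting which joint value functions they can represent; QPLEX's contribution is a factorization that keeps IGM while covering the full class of factorizable tasks.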
