Attention-Guided Contrastive Role Representations for Multi-Agent Reinforcement Learning

Zican Hu, Zongzhang Zhang, Huaxiong Li, Chunlin Chen, Hongyu Ding, Zhi Wang

arXiv.org Artificial Intelligence 

Cooperative multi-agent reinforcement learning (MARL) aims to coordinate a system of agents toward optimizing global returns (Vinyals et al., 2019), and has shown great promise in various domains, such as autonomous vehicles (Zhou et al., 2020), smart grids (Chen et al., 2021a), robotics (Yu et al., 2023), and social science (Leibo et al., 2017). Training reliable control policies for coordinating such systems remains a major challenge. The centralized training with decentralized execution (CTDE) paradigm (Foerster et al., 2016) combines the merits of independent Q-learning (Foerster et al., 2017) and joint action learning (Sukhbaatar et al., 2016), and has become a compelling framework that exploits the centralized training opportunity while producing fully decentralized policies (Wang et al., 2023). Numerous popular algorithms have subsequently been proposed, including VDN (Sunehag et al., 2018), QMIX (Rashid et al., 2020), MAAC (Iqbal & Sha, 2019), and MAPPO (Yu et al., 2022).

Sharing policy parameters is crucial for scaling these algorithms to a large number of agents and for accelerating cooperation learning (Fu et al., 2022). However, it is widely observed that parameter-sharing agents tend to acquire homogeneous behaviors, which can hinder diversified exploration and sophisticated coordination (Christianos et al., 2021). Some methods (Li et al., 2021; Jiang & Lu, 2021; Liu et al., 2023) attempt to promote individualized behaviors by distinguishing each agent from the others, but they often neglect the prospect of effective team composition with implicit task allocation. Real-world multi-agent tasks usually involve dynamic team composition with the emergence of roles (Shao et al., 2022; Hu et al., 2022).
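To make the CTDE pattern discussed above concrete, the following is a minimal sketch, not taken from the paper, of a parameter-shared per-agent Q-network conditioned on a one-hot agent ID, whose chosen utilities are combined by VDN-style additive mixing, Q_tot = Σ_i Q_i(o_i, a_i). All class names, dimensions, and PyTorch implementation details are illustrative assumptions.

```python
# Illustrative sketch of parameter sharing + VDN-style value decomposition under CTDE.
# Not the paper's method; names and dimensions are assumptions for exposition.
import torch
import torch.nn as nn


class SharedAgentQNet(nn.Module):
    """One Q-network shared by all agents; a one-hot agent ID disambiguates inputs."""

    def __init__(self, obs_dim: int, n_agents: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.n_agents = n_agents
        self.net = nn.Sequential(
            nn.Linear(obs_dim + n_agents, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # obs: (batch, n_agents, obs_dim) -> per-agent action values (batch, n_agents, n_actions)
        ids = torch.eye(self.n_agents, device=obs.device).expand(obs.shape[0], -1, -1)
        return self.net(torch.cat([obs, ids], dim=-1))


def vdn_joint_q(per_agent_q: torch.Tensor, actions: torch.Tensor) -> torch.Tensor:
    """VDN-style additive mixing: Q_tot = sum over agents of Q_i(o_i, a_i)."""
    chosen = per_agent_q.gather(-1, actions.unsqueeze(-1)).squeeze(-1)  # (batch, n_agents)
    return chosen.sum(dim=-1)                                           # (batch,)


if __name__ == "__main__":
    batch, n_agents, obs_dim, n_actions = 32, 5, 16, 6
    qnet = SharedAgentQNet(obs_dim, n_agents, n_actions)
    obs = torch.randn(batch, n_agents, obs_dim)
    actions = torch.randint(0, n_actions, (batch, n_agents))
    q_tot = vdn_joint_q(qnet(obs), actions)  # centralized training signal
    print(q_tot.shape)  # torch.Size([32]); execution remains decentralized per agent
```

Because every agent queries the same network and differs only in its observation and ID, this setup scales well but, as the text notes, tends to produce homogeneous behaviors; QMIX replaces the sum with a state-conditioned monotonic mixing network, and role-based methods further condition each agent on a learned role representation.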