Scalable Multi-Agent Reinforcement Learning through Intelligent Information Aggregation
Nayak, Siddharth, Choi, Kenneth, Ding, Wenqi, Dolan, Sydney, Gopalakrishnan, Karthik, Balakrishnan, Hamsa
arXiv.org Artificial Intelligence
We consider the problem of multi-agent navigation and collision avoidance when observations are limited to the local neighborhood of each agent. We propose InforMARL, a novel architecture for multi-agent reinforcement learning (MARL) which uses local information intelligently to compute paths for all the agents in a decentralized manner. Specifically, InforMARL aggregates information about the local neighborhood of agents for both the actor and the critic using a graph neural network, and can be used in conjunction with any standard MARL algorithm.

In such cases, multiple agents may need to work together and share information in order to accomplish the task (Tan, 1993b). Naïve extensions of single-agent RL algorithms to multi-agent settings do not work well because of the non-stationarity in the environment, i.e., the actions of one agent affect the actions of others (Tan, 1993a; Tampuu et al., 2015). Furthermore, tasks may require cooperation among the agents. Classical approaches to optimal planning may (1) be computationally intractable, especially for real-time applications, and (2) be unable to account for complex interactions and shared objectives between multiple agents. The ability of RL to learn by trial-and-error makes it well-suited for problems in which optimization-based methods are not […]
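The core idea, aggregating each agent's local-neighborhood information before it reaches the actor and critic, can be sketched in a few lines. The snippet below is a minimal illustration using mean pooling over a radius-based neighborhood graph; the function name, the pooling choice, and the plain-NumPy formulation are assumptions for exposition, not the paper's actual GNN layer.

```python
import numpy as np

def neighborhood_aggregate(features, positions, radius):
    """Mean-aggregate each agent's local-neighborhood features.

    A simplified stand-in for graph-based aggregation in the spirit of
    InforMARL: agents within `radius` of one another are connected, and
    each agent's embedding is the mean of its own and its neighbors'
    feature vectors. (Illustrative assumption, not the paper's exact
    message-passing scheme.)
    """
    n = len(positions)
    aggregated = np.zeros_like(features)
    for i in range(n):
        # Neighborhood of agent i: all agents within `radius`, itself included.
        dists = np.linalg.norm(positions - positions[i], axis=1)
        mask = dists <= radius
        aggregated[i] = features[mask].mean(axis=0)
    return aggregated
```

Because the aggregated vector has a fixed size regardless of how many neighbors an agent has, the same actor and critic networks can be reused for any number of agents, which is what makes this style of architecture scalable.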
May-16-2023