Collaborating Authors

 Siedler, Philipp Dominic


Learning to Communicate and Collaborate in a Competitive Multi-Agent Setup to Clean the Ocean from Macroplastics

arXiv.org Artificial Intelligence

Finding a balance between collaboration and competition is crucial for artificial agents in many real-world applications. We investigate this using a Multi-Agent Reinforcement Learning (MARL) setup on the back of a high-impact problem. The accumulation and yearly growth of plastic in the ocean cause irreparable damage to many aspects of oceanic health and the marine ecosystem. To prevent further damage, we need to find ways to remove macroplastics from known plastic patches in the ocean. Here we propose a Graph Neural Network (GNN) based communication mechanism that increases the agents' observation space. In our custom environment, agents control a plastic-collecting vessel. The communication mechanism enables agents to develop a communication protocol using a binary signal. While the goal of the agent collective is to clean up as much as possible, agents are rewarded for the individual amount of macroplastics collected. Hence, agents have to learn to communicate effectively while maintaining high individual performance. We compare our proposed communication mechanism with a multi-agent baseline without the ability to communicate. Results show that communication enables collaboration and significantly increases collective performance. This means agents have learned the importance of communication and found a balance between collaboration and competition.
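The core idea of extending each agent's observation space with neighbours' binary signals can be sketched as follows. This is a minimal illustration, not the paper's exact architecture: the mean-pooling of signals and the adjacency layout are illustrative assumptions.

```python
import numpy as np

def communicate(obs, signals, adjacency):
    """Append neighbours' binary signals (mean-pooled here, as one
    simple aggregation choice) to each agent's local observation,
    extending the observation space."""
    # adjacency[i, j] = 1 if agent j's signal reaches agent i
    n = adjacency.sum(axis=1, keepdims=True)           # neighbour counts
    pooled = adjacency @ signals / np.maximum(n, 1)    # pooled binary signal
    return np.concatenate([obs, pooled], axis=1)       # extended observation

# Hypothetical example: 3 vessels with 2 local observation features each
obs = np.array([[0.2, 0.5], [0.9, 0.1], [0.4, 0.4]])
signals = np.array([[1.0], [0.0], [1.0]])              # 1-bit messages
adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]])      # who hears whom
ext = communicate(obs, signals, adj)                   # shape (3, 3)
```

Each agent's policy then acts on the extended observation, so the learned meaning of the binary signal emerges from training rather than being hand-designed.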


Dynamic Collaborative Multi-Agent Reinforcement Learning Communication for Autonomous Drone Reforestation

arXiv.org Artificial Intelligence

We approach autonomous drone-based reforestation with a collaborative multi-agent reinforcement learning (MARL) setup. Agents can communicate as part of a dynamically changing network. We explore collaboration and communication on the back of a high-impact problem. Forests are the main resource for controlling rising CO2 levels. Unfortunately, the global forest volume is decreasing at an unprecedented rate. Many areas are too large and too hard to traverse to plant new trees manually. To cover as much area as possible efficiently, here we propose a Graph Neural Network (GNN) based communication mechanism that enables collaboration. Agents can share location information on areas needing reforestation, which increases the area surveyed and the number of trees planted. We compare our proposed communication mechanism with a multi-agent baseline without the ability to communicate. Results show how communication enables collaboration and increases collective performance, planting precision, and the risk-taking propensity of individual agents.
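A dynamically changing communication network of this kind can be built from agent positions at each step. The sketch below is a plausible construction under the assumption that connectivity is range-limited; the communication range and positions are hypothetical values, not taken from the paper.

```python
import numpy as np

def comm_graph(positions, comm_range):
    """Build the dynamic adjacency matrix: an edge exists while two
    drones are within communication range of each other. Recomputed
    every step, so the graph changes as the drones move."""
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)           # pairwise distances
    adj = (dist <= comm_range) & (dist > 0)        # no self-loops
    return adj.astype(float)

# Hypothetical snapshot: 3 drones on a line, 5-unit radio range
pos = np.array([[0.0, 0.0], [3.0, 0.0], [10.0, 0.0]])
adj = comm_graph(pos, comm_range=5.0)
```

A GNN message-passing layer then operates on this adjacency each step, so which agents can share reforestation-site locations depends on where they currently are.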


The Power of Communication in a Distributed Multi-Agent System

arXiv.org Artificial Intelligence

Single-Agent (SA) Reinforcement Learning systems have shown outstanding results on non-stationary problems. However, Multi-Agent Reinforcement Learning (MARL) systems can surpass SA systems in general and when scaling. Furthermore, MA systems can be amplified by collaboration, which can happen through observing others or through a communication system used to share information between collaborators. Here, we developed a distributed MA learning mechanism with the ability to communicate, based on decentralised partially observable Markov decision processes (Dec-POMDPs) and Graph Neural Networks (GNNs). Collaborative MA mechanisms can minimise the time and energy consumed in training machine learning models while improving performance. We demonstrate this in a real-world scenario, an offshore wind farm comprising a set of distributed wind turbines, where the objective is to maximise collective efficiency. Compared to an SA system, MA collaboration has shown significantly reduced training time and higher cumulative rewards in unseen and scaled scenarios.
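In a Dec-POMDP, each agent acts on a local observation, and a GNN round lets agents fold neighbours' information into their local state before acting. The following is a minimal sketch of one such message-passing round; the sum aggregation, tanh nonlinearity, weight shapes, and the fully connected farm topology are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def gnn_round(h, adj, W_self, W_msg):
    """One round of message passing: each agent updates its hidden
    state from its own state plus the sum of its neighbours' states.
    Weights are shared across agents, so the mechanism scales to
    farms with more turbines than seen in training."""
    msgs = adj @ h                                  # sum neighbour states
    return np.tanh(h @ W_self + msgs @ W_msg)       # updated local states

# Hypothetical farm: 4 turbines, 8-dim hidden states, fully connected
rng = np.random.default_rng(0)
n_agents, d = 4, 8
h = rng.normal(size=(n_agents, d))
adj = np.ones((n_agents, n_agents)) - np.eye(n_agents)
W_self = rng.normal(size=(d, d)) * 0.1
W_msg = rng.normal(size=(d, d)) * 0.1
h_next = gnn_round(h, adj, W_self, W_msg)           # shape (4, 8)
```

Because the update is local (each row depends only on that agent's state and incoming messages), each turbine's policy remains decentralised at execution time, which is the defining property of the Dec-POMDP setting.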