Learning to Improve Representations by Communicating About Perspectives
Julius Taylor, Eleni Nisioti, Clément Moulin-Frier
arXiv.org Artificial Intelligence
Effective latent representations need to capture abstract features of the external world. We hypothesise that the necessity for a group of agents to reconcile their subjective interpretations of a shared environment state is an essential factor influencing this property. To test this hypothesis, we propose an architecture in which individual agents in a population receive different observations of the same underlying state and learn latent representations that they communicate to each other. We highlight a fundamental link between emergent communication and representation learning: the role of language as a cognitive tool and the opportunities conferred by subjectivity, an inherent property of most multi-agent systems. We present a minimal architecture composed of a population of autoencoders, in which we define loss functions capturing different aspects of effective communication, and examine their effect on the learned representations. We show that our proposed architecture allows the emergence of aligned representations. The subjectivity introduced by presenting agents with distinct perspectives of the environment state contributes to learning abstract representations that outperform those learned by both a single autoencoder and a population of autoencoders presented with identical perspectives. Altogether, our results demonstrate how communication from subjective perspectives can lead to the acquisition of more abstract representations in multi-agent systems, opening promising perspectives for future research at the intersection of representation learning and emergent communication.
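The abstract does not specify the exact loss functions, so the following is only an illustrative sketch of the kind of objective such an architecture might use: each agent observes a different linear "view" of the same underlying state, encodes it to a latent vector, and the population objective combines per-agent reconstruction error with a pairwise latent-alignment penalty that rewards agents for agreeing about the shared state. All names, dimensions, and the choice of linear encoders/decoders here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: underlying state, per-agent observation,
# communicated latent, and number of agents in the population.
state_dim, obs_dim, latent_dim, n_agents = 4, 6, 2, 2

# Fixed random view matrices give each agent a distinct (subjective)
# observation of the same underlying state.
views = [rng.normal(size=(obs_dim, state_dim)) for _ in range(n_agents)]

# Each agent has a linear encoder and decoder; in the real architecture
# these weights would be learned by gradient descent.
encoders = [rng.normal(size=(latent_dim, obs_dim)) * 0.1 for _ in range(n_agents)]
decoders = [rng.normal(size=(obs_dim, latent_dim)) * 0.1 for _ in range(n_agents)]

def population_loss(state, beta=1.0):
    """Sum of per-agent reconstruction losses plus a pairwise
    latent-alignment term weighted by beta (an assumed trade-off)."""
    obs = [V @ state for V in views]                # subjective observations
    latents = [E @ o for E, o in zip(encoders, obs)]  # communicated latents
    recon = sum(np.mean((D @ z - o) ** 2)
                for D, z, o in zip(decoders, latents, obs))
    align = sum(np.mean((latents[i] - latents[j]) ** 2)
                for i in range(n_agents) for j in range(i + 1, n_agents))
    return recon + beta * align

state = rng.normal(size=state_dim)
loss = population_loss(state)
```

With `beta = 0` the agents are independent autoencoders; increasing `beta` pressures the population toward aligned representations of the shared state, which is the effect the paper studies.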
Sep-20-2021