McBurney, Peter
The Propensity for Density in Feed-forward Models
Schoots, Nandi, Jackson, Alex, Kholmovaia, Ali, McBurney, Peter, Shanahan, Murray
Does the process of training a neural network to solve a task tend to use all of the available weights even when the task could be solved with fewer weights? To address this question we study the effects of pruning fully connected, convolutional and residual models while varying their widths. We find that the proportion of weights that can be pruned without degrading performance is largely invariant to model size. Increasing the width of a model has little effect on the density of the pruned model relative to the increase in absolute size of the pruned network. In particular, we find substantial prunability across a large range of model sizes, where our biggest model is 50 times as wide as our smallest model. We explore three hypotheses that could explain these findings.
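A minimal sketch of the kind of pruning sweep the abstract describes (illustrative code, not the authors'; evaluate stands in for a held-out accuracy function): sweep global L1 magnitude pruning over sparsity levels and return the largest level whose score stays within tolerance of the unpruned baseline.

import copy
import torch.nn as nn
import torch.nn.utils.prune as prune

def max_prunable_fraction(model, evaluate, tol=0.01):
    # Largest global sparsity whose evaluation score stays within tol of the
    # unpruned baseline; evaluate(model) -> held-out score, higher is better.
    baseline = evaluate(model)
    best = 0.0
    for sparsity in (0.1 * k for k in range(1, 10)):
        candidate = copy.deepcopy(model)
        layers = [(m, "weight") for m in candidate.modules()
                  if isinstance(m, (nn.Linear, nn.Conv2d))]
        prune.global_unstructured(layers, pruning_method=prune.L1Unstructured,
                                  amount=sparsity)
        if evaluate(candidate) >= baseline - tol:
            best = sparsity
    return best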
Mimicry and the Emergence of Cooperative Communication
Cope, Dylan, McBurney, Peter
In many situations, communication between agents is a critical component of cooperative multi-agent systems; however, it can be difficult to learn or evolve. In this paper, we investigate a simple way in which the emergence of communication may be facilitated. Namely, we explore the effects of allowing agents to mimic preexisting, externally generated useful signals. The key idea is that these signals incentivise listeners to develop positive responses, which can then also be invoked by speakers mimicking those signals. This investigation starts by formalising the problem and demonstrating that this form of mimicry changes the optimisation dynamics and may provide an opportunity to escape non-communicative local optima. We then explore the problem empirically with a simulation in which spatially situated agents must communicate to collect resources. Our results show that both evolutionary optimisation and reinforcement learning may benefit from this intervention.
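A toy illustration of the optimisation argument (our construction, not the paper's model): an external source first trains the listener to respond to a signal, after which a speaker that mimics the signal is rewarded immediately, rather than being stuck in the non-communicative local optimum where signalling goes unanswered.

import random

random.seed(0)
ALPHA, EPSILON = 0.1, 0.1

listener_q = {"respond": 0.0, "ignore": 0.0}   # value of reacting to the signal
speaker_q = {"signal": 0.0, "silent": 0.0}     # value of emitting it near food

def act(q):
    if random.random() < EPSILON:
        return random.choice(list(q))
    return max(q, key=q.get)

# Phase 1: the environment itself emits the signal whenever food appears, so
# responding to it pays and the listener acquires a positive response.
for _ in range(200):
    a = act(listener_q)
    reward = 1.0 if a == "respond" else 0.0
    listener_q[a] += ALPHA * (reward - listener_q[a])

# Phase 2: the speaker mimics the pre-established signal; because the listener
# already responds, signalling is rewarded from the very first episodes.
for _ in range(200):
    a = act(speaker_q)
    listener_responds = max(listener_q, key=listener_q.get) == "respond"
    reward = 1.0 if (a == "signal" and listener_responds) else 0.0
    speaker_q[a] += ALPHA * (reward - speaker_q[a])

print(listener_q, speaker_q)   # both end up favouring communication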
The Topos of Transformer Networks
Villani, Mattia Jacopo, McBurney, Peter
The transformer neural network has significantly outshone all other neural network architectures as the engine behind large language models. We provide a theoretical analysis of the expressivity of the transformer architecture through the lens of topos theory. From this viewpoint, we show that many common neural network architectures, such as convolutional, recurrent and graph convolutional networks, can be embedded in a pretopos of piecewise-linear functions, but that the transformer necessarily lives in its topos completion. In particular, this suggests that the two network families instantiate different fragments of logic: the former are first-order, whereas transformers are higher-order reasoners. Furthermore, we draw parallels with architecture search and gradient descent, integrating our analysis within the framework of cybernetic agents.
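The contrast can be made concrete with the standard layer equations (textbook definitions, not the paper's notation): a convolutional or recurrent layer applies a fixed affine map before its nonlinearity, whereas self-attention mixes its input with coefficients that themselves depend on the input.

\[
  \text{fixed-weight layer:}\quad x \mapsto \sigma(Wx + b),
  \qquad
  \text{self-attention:}\quad
  \operatorname{Attn}(X) = \operatorname{softmax}\!\left(\frac{XW_Q (XW_K)^\top}{\sqrt{d_k}}\right) XW_V .
\]

The softmax factor plays the role of a weight matrix that is itself a function of X, an informal glimpse of the distinction the paper formalises.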
Learning Translations: Emergent Communication Pretraining for Cooperative Language Acquisition
Cope, Dylan, McBurney, Peter
In Emergent Communication (EC), agents learn to communicate with one another, but the protocols that they develop are specialised to their training community. This observation led to research into Zero-Shot Coordination (ZSC) for learning communication strategies that are robust to agents not encountered during training. However, ZSC typically assumes that no prior data is available about the agents that will be encountered in the zero-shot setting. In many cases, this presents an unnecessarily hard problem and rules out communication via pre-established conventions. We propose a novel AI challenge called a Cooperative Language Acquisition Problem (CLAP), in which the ZSC assumptions are relaxed by allowing a 'joiner' agent to learn from a dataset of interactions between agents in a target community. We propose and compare two methods for solving CLAPs: Imitation Learning (IL), and Emergent Communication pretraining and Translation Learning (ECTL), in which an agent is trained in self-play with EC and then learns from the data to translate between the emergent protocol and the target community's protocol.
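The translation-learning step can be sketched in a few lines (our simplification, not the paper's method): given logged pairings of emergent-protocol tokens with the target community's tokens for the same referent, estimate a token-to-token translation by co-occurrence counting.

from collections import Counter, defaultdict

def fit_translation(pairs):
    # pairs: iterable of (emergent_token, target_token) observed for the same
    # referent; returns the most frequent target token for each emergent token.
    counts = defaultdict(Counter)
    for emergent, target in pairs:
        counts[emergent][target] += 1
    return {e: c.most_common(1)[0][0] for e, c in counts.items()}

log = [("a", "left"), ("a", "left"), ("b", "right"), ("a", "right"), ("b", "right")]
print(fit_translation(log))   # {'a': 'left', 'b': 'right'}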
A Measure of Explanatory Effectiveness
Cope, Dylan, McBurney, Peter
The term explanation in artificial intelligence (AI) is often conflated with the concepts of interpretability and explainable AI (XAI), but there are important distinctions to be made. Miller (2019) defines interpretability and XAI as the process of building AI systems that humans can understand. In other words, by design, the AI's decision-making process is inherently transparent to a human. In contrast, explicitly explaining the decision-making to an arbitrary human is explanation generation. The latter is the subject of this paper. More specifically, we are working towards developing a formal framework for the automated generation and assessment of explanations. Firstly, some key terminology: an explanation is generated through a dialectical interaction whereby one agent, the explainer, seeks to 'explain' some phenomenon, called the explanandum, to another agent, the explainee. In this article, we propose a novel measure of explanatory effectiveness that can be used to motivate artificial agents to generate good explanations (e.g. in the form of a reward signal), or to analyse the behaviours of existing communicating agents. We then define explanation games as cooperative games in which two (or more) agents seek to maximise the effectiveness measure.
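One simple way to operationalise such a measure (our own formulation, offered only as an illustration of the idea, not the paper's definition) is as the explainee's gain in ability to answer questions about the explanandum, measured before and after the explanation dialogue.

def explanatory_effectiveness(explainee, tests, explain):
    # tests: list of (question, answer) pairs probing the explanandum;
    # explain: a callable enacting the explainer's side of the dialogue.
    def accuracy(agent):
        return sum(agent.predict(q) == a for q, a in tests) / len(tests)
    before = accuracy(explainee)
    explain(explainee)              # the dialogue may change the explainee
    return accuracy(explainee) - before

class ToyExplainee:
    def __init__(self):
        self.known = {}
    def predict(self, q):
        return self.known.get(q)

tests = [("why_hot", "sun"), ("why_wet", "rain")]
agent = ToyExplainee()
print(explanatory_effectiveness(agent, tests, lambda a: a.known.update(dict(tests))))  # 1.0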
Joining the Conversation: Towards Language Acquisition for Ad Hoc Team Play
Cope, Dylan, McBurney, Peter
In this paper, we propose and consider the problem of cooperative language acquisition as a particular form of the ad hoc team play problem. We then present a probabilistic model for inferring a speaker's intentions and a listener's semantics from observing communications between a team of language-users. This model builds on the assumptions that speakers engage in positive signalling and listeners exhibit positive listening, which is to say that messages convey information otherwise hidden from the listener, and that this information then causes the listener to change their behaviour. Further, the model accounts for potential sub-optimality in the speaker's ability to convey the right information (according to the given task). Finally, we discuss further work for testing and developing this framework.
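As a sketch of the inference problem (our toy version, not the paper's model): under positive signalling, messages are correlated with the hidden state, so an observer can maintain a Bayesian posterior over candidate semantics of a token from logged (message, state) pairs.

def posterior_over_meanings(observations, meanings, likelihood):
    # observations: list of (message, hidden_state) pairs;
    # likelihood(message, state, meaning) -> P(message | state, meaning).
    post = {m: 1.0 / len(meanings) for m in meanings}
    for message, state in observations:
        post = {m: p * likelihood(message, state, m) for m, p in post.items()}
        z = sum(post.values())
        post = {m: p / z for m, p in post.items()}
    return post

def lik(msg, state, meaning):
    # Token 'x' is sent with probability 0.9 when the state matches its meaning.
    return (0.9 if state == meaning else 0.1) if msg == "x" else 0.5

obs = [("x", "food"), ("x", "food"), ("x", "danger")]
print(posterior_over_meanings(obs, ["food", "danger"], lik))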
Unwrapping All ReLU Networks
Villani, Mattia Jacopo, McBurney, Peter
Deep ReLU networks can be decomposed into a collection of linear models, each defined on a region of a partition of the input space. This paper provides three results extending this theory. First, we extend these linear decompositions to graph neural networks and tensor convolutional networks, as well as networks with multiplicative interactions. Second, we prove that neural networks can be understood as interpretable models such as multivariate decision trees and logical theories. Finally, we show how this view leads to cheap and exact computation of SHAP values.
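For a plain ReLU MLP the region-wise linear view can be checked directly (a minimal sketch; the paper extends the idea to graph, tensor-convolutional and multiplicative networks): within the activation region containing an input x, the network equals an affine map whose matrix is the Jacobian at x.

import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))

x = torch.randn(4)
A = torch.autograd.functional.jacobian(net, x)   # exact local weight matrix
b = net(x) - A @ x                               # exact local bias

x2 = x + 1e-3 * torch.randn(4)                   # nearby point, likely same region
print(torch.allclose(net(x2), A @ x2 + b, atol=1e-5))  # True unless x2 crossed a boundary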
The Influence of Memory in Multi-Agent Consensus
Marzagão, David Kohan, Bonatto, Luciana Basualdo, Madeira, Tiago, Gauy, Marcelo Matheus, McBurney, Peter
Multi-agent consensus problems can often be seen as a sequence of autonomous and independent local choices between a finite set of decision options, with each local choice undertaken simultaneously, and with a shared goal of achieving a global consensus state. Being able to estimate probabilities for the different outcomes, and to predict how long it takes for a consensus to be formed, if ever, are core issues for such protocols. Little attention has been given to protocols in which agents can remember past or outdated states. In this paper, we propose a framework to study what we call memory consensus protocols. We show that the employment of memory allows such processes to always converge and, in some scenarios such as cycles, to converge faster. We provide a theoretical analysis of the probability of each option eventually winning such processes, based on the initial opinions expressed by the agents. Further, we perform experiments to investigate network topologies in which agents benefit from memory in terms of the expected time needed for consensus.
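A toy version of such a process (our construction following the abstract, not the paper's exact protocol): synchronous opinion dynamics on a cycle in which each agent takes the majority over its neighbours' states from the last few remembered rounds, keeping its own opinion on ties.

import random

def consensus_on_cycle(n=10, memory=2, max_rounds=1000, seed=1):
    rng = random.Random(seed)
    history = [[rng.choice((0, 1)) for _ in range(n)]]   # round-0 opinions
    for t in range(max_rounds):
        window = history[-memory:]                       # remembered rounds
        current = history[-1]
        nxt = []
        for i in range(n):
            votes = [past[j] for past in window
                     for j in ((i - 1) % n, (i + 1) % n)]
            if sum(votes) * 2 > len(votes):
                nxt.append(1)
            elif sum(votes) * 2 < len(votes):
                nxt.append(0)
            else:
                nxt.append(current[i])                   # tie: keep own opinion
        history.append(nxt)
        if all(v == nxt[0] for v in nxt):
            return nxt[0], t + 1                         # consensus value, rounds
    return None, max_rounds

print(consensus_on_cycle())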
Formalizing Scenario Analysis
McBurney, Peter, Parsons, Simon
We propose a formal treatment of scenarios in the context of a dialectical argumentation formalism for qualitative reasoning about uncertain propositions. Our formalism extends prior work in which arguments for and against uncertain propositions were presented and compared in interaction spaces called Agoras. We now define the notion of a scenario in this framework and use it to define a set of qualitative uncertainty labels for propositions across a collection of scenarios. This work is intended to lead to a formal theory of scenarios and scenario analysis.
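A minimal illustration of how qualitative labels across scenarios might look (our example labels, not the formalism's definitions): classify each proposition by the scenarios in which it is accepted.

def label_propositions(scenarios, propositions):
    # scenarios: list of sets, each the propositions accepted in one scenario.
    labels = {}
    for p in propositions:
        holding = sum(p in s for s in scenarios)
        if holding == len(scenarios):
            labels[p] = "accepted in all scenarios"
        elif holding == 0:
            labels[p] = "accepted in no scenario"
        else:
            labels[p] = "accepted in some scenarios"
    return labels

print(label_propositions([{"p", "q"}, {"p"}], ["p", "q", "r"]))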
A Simple Logical Approach to Reasoning with and about Trust
Parsons, Simon (Brooklyn College City University of New York) | Sklar, Elizabeth (Brooklyn College, City University of New York) | McBurney, Peter (University of Liverpool)
Trust is an approach to managing the uncertainty about autonomous entities and the information they store, and so can play an important role in any decentralized system. As a result, trust has been widely studied in multiagent systems and related fields such as the semantic web. Here we introduce a simple approach to reasoning about trust with logic.