 Tschantz, Alexander


Investigating the Scalability and Biological Plausibility of the Activation Relaxation Algorithm

arXiv.org Artificial Intelligence

The recently proposed Activation Relaxation (AR) algorithm provides a simple and robust approach for approximating the backpropagation of error algorithm using only local learning rules. We have previously shown that the algorithm can be further simplified and made more biologically plausible by (i) introducing a learnable set of backwards weights, which overcomes the weight-transport problem, and (ii) avoiding the computation of nonlinear derivatives at each neuron. However, the efficacy of these simplifications has, so far, only been tested on simple multi-layer-perceptron (MLP) networks. Here, we show that these simplifications still maintain performance using more complex CNN architectures and challenging datasets, which have proven difficult for other biologically-plausible schemes to scale to. We also investigate whether another biologically implausible assumption of the original AR algorithm - the frozen feedforward pass - can be relaxed without damaging performance. The backpropagation of error algorithm (backprop) has been the engine driving the successes of modern machine learning with deep neural networks.
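The two simplifications described above can be illustrated with a short sketch. This is our own minimal illustration, not the paper's code: the function name and shapes are invented, the backward matrices `B[l]` stand in for the transpose of the forward weights (removing weight transport), and no nonlinear derivative appears in the update.

```python
import numpy as np

def relax_simplified(e_out, layer_sizes, B, n_steps=200, lr=0.1):
    """Backward relaxation phase with the two simplifications:
    learnable backward weights B[l] (instead of W[l].T) and no
    nonlinear derivative f'(h_l) in the update rule."""
    x = [np.zeros(n) for n in layer_sizes]
    x.append(e_out)                      # top unit clamped to the output error
    for _ in range(n_steps):
        for l in range(len(layer_sizes)):
            # purely local leaky dynamics: dx_l/dt = -x_l + B_l @ x_{l+1}
            x[l] += lr * (-x[l] + B[l] @ x[l + 1])
    return x[:-1]
```

Each update uses only the unit's own state and the layer directly above it, which is what makes the scheme local.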


Relaxing the Constraints on Predictive Coding Models

arXiv.org Artificial Intelligence

Predictive coding is an influential theory of cortical function which posits that the principal computation the brain performs, which underlies both perception and learning, is the minimization of prediction errors. While motivated by high-level notions of variational inference, detailed neurophysiological models of cortical microcircuits which can implement its computations have been developed. Moreover, under certain conditions, predictive coding has been shown to approximate the backpropagation of error algorithm, and thus provides a relatively biologically plausible credit-assignment mechanism for training deep networks. However, standard implementations of the algorithm still involve potentially neurally implausible features such as identical forward and backward weights, backward nonlinear derivatives, and 1-1 error unit connectivity. In this paper, we show that these features are not integral to the algorithm and can be removed either directly or through learning additional sets of parameters with Hebbian update rules without noticeable harm to learning performance. Our work thus relaxes current constraints on potential microcircuit designs and hopefully opens up new regions of the design-space for neuromorphic implementations of predictive coding.
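The core computation — inference as descent on summed squared prediction errors — can be sketched as follows. This is our own simplified linear, single-hidden-layer illustration, not the paper's code; the function name and variables are invented for exposition.

```python
import numpy as np

def pc_infer(o, prior, W, n_steps=300, lr=0.05):
    """Minimal (linear) predictive-coding inference for one hidden layer.
    The hidden state x descends the sum of squared prediction errors:
    a bottom-up error (o - W x) and a top-down error (x - prior)."""
    x = prior.copy()
    for _ in range(n_steps):
        eps_bottom = o - W @ x          # sensory-level prediction error
        eps_top = x - prior             # prior-level prediction error
        # gradient descent on F = 0.5 * (||eps_bottom||^2 + ||eps_top||^2)
        x += lr * (W.T @ eps_bottom - eps_top)
    return x
```

A Hebbian weight update in this setting would be proportional to the outer product of the sensory error and the hidden state, using only quantities local to each connection.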


Activation Relaxation: A Local Dynamical Approximation to Backpropagation in the Brain

arXiv.org Artificial Intelligence

Can the powerful backpropagation of error (backprop) algorithm be formulated in a manner suitable for implementation in neural circuitry? The primary challenge is to ensure that any candidate formulation uses only local information, rather than relying on global (error) signals, as in orthodox backprop. Recently several algorithms for approximating backprop using only local signals, such as predictive coding and equilibrium-prop, have been proposed. However, these algorithms typically impose other requirements which challenge biological plausibility: for example, requiring complex and precise connectivity schemes (predictive coding), or multiple sequential backwards phases with information being stored across phases (equilibrium-prop). Here, we propose a novel local algorithm, Activation Relaxation (AR), which is motivated by constructing the backpropagation gradient as the equilibrium point of a dynamical system. Our algorithm converges robustly and exactly to the correct backpropagation gradients, requires only a single type of neuron, utilises only a single backwards phase, and can perform credit assignment on arbitrary computation graphs. We illustrate these properties by training deep neural networks on visual classification tasks, and we describe simplifications to the algorithm which remove further obstacles to neurobiological implementation (for example, the weight-transport problem, and the use of nonlinear derivatives), while preserving performance.
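The claim that the equilibrium of the dynamics equals the backprop gradient can be checked numerically. Below is our own sketch of the backward phase, not the paper's code; it assumes the forward convention h[l+1] = W[l] @ f(h[l]), under which the backprop error signal satisfies delta[l] = f'(h[l]) * (W[l].T @ delta[l+1]).

```python
import numpy as np

def ar_backward(delta_top, hs, Ws, fprime, n_steps=300, lr=0.1):
    """Backward phase of Activation Relaxation (sketch).
    Each layer's unit x[l] obeys the local dynamics
        dx[l]/dt = -x[l] + fprime(hs[l]) * (Ws[l].T @ x[l+1]),
    whose equilibrium is the backprop error signal dL/dh[l]."""
    L = len(hs)
    x = [np.zeros_like(h) for h in hs] + [delta_top]
    for _ in range(n_steps):
        for l in range(L):
            x[l] += lr * (-x[l] + fprime(hs[l]) * (Ws[l].T @ x[l + 1]))
    return x[:L]
```

At convergence the relaxed states match the chain-rule error signals exactly, which is the sense in which AR approximates backprop without a global error pathway.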


Control as Hybrid Inference

arXiv.org Artificial Intelligence

The field of reinforcement learning can be split into model-based and model-free methods. Here, we unify these approaches by casting model-free policy optimisation as amortised variational inference, and model-based planning as iterative variational inference, within a 'control as hybrid inference' (CHI) framework. We present an implementation of CHI which naturally mediates the balance between iterative and amortised inference. Using a didactic experiment, we demonstrate that the proposed algorithm operates in a model-based manner at the onset of learning, before converging to a model-free algorithm once sufficient data have been collected. We verify the scalability of our algorithm on a continuous control benchmark, demonstrating that it outperforms strong model-free and model-based baselines. CHI thus provides a principled framework for harnessing the sample efficiency of model-based planning while retaining the asymptotic performance of model-free policy optimisation.
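The amortised-then-iterative structure can be conveyed with a toy sketch. This is purely our own illustration, not the paper's implementation: the linear policy stands in for a learned amortised network, and the "planning objective" is an arbitrary differentiable function.

```python
import numpy as np

def hybrid_infer(state, policy_W, objective_grad, n_iters, lr=0.1):
    """Toy illustration of hybrid inference: an amortised policy
    proposes an action, then iterative inference refines it by
    gradient ascent on a planning objective. n_iters=0 is purely
    amortised ('model-free'); large n_iters approaches pure
    iterative planning ('model-based')."""
    action = policy_W @ state                            # amortised proposal
    for _ in range(n_iters):
        action = action + lr * objective_grad(action)    # iterative refinement
    return action
```

The balance the paper describes corresponds to adapting the amount (and weighting) of iterative refinement as the amortised policy improves with data.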


Whence the Expected Free Energy?

arXiv.org Artificial Intelligence

The Expected Free Energy (EFE) is a central quantity in the theory of active inference. It is the quantity that all active inference agents are mandated to minimize through action, and its decomposition into extrinsic and intrinsic value terms is key to the balance of exploration and exploitation that active inference agents evince. Despite its importance, the mathematical origins of this quantity and its relation to the Variational Free Energy (VFE) remain unclear. In this paper, we investigate the origins of the EFE in detail and show that it is not simply "the free energy in the future". We present a functional that we argue is the natural extension of the VFE, but which actively discourages exploratory behaviour, thus demonstrating that exploration does not directly follow from free energy minimization into the future. We then develop a novel objective, the Free-Energy of the Expected Future (FEEF), which possesses both the epistemic component of the EFE as well as an intuitive mathematical grounding as the divergence between predicted and desired futures.
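For reference, the quantities discussed above are usually written as follows in the active-inference literature (notation is the standard one, with \(\tilde{p}\) the biased generative model encoding preferences; signs and groupings vary slightly across papers):

```latex
% EFE for a policy \pi, with its extrinsic/epistemic decomposition:
G(\pi) = \sum_\tau \mathbb{E}_{q(o_\tau, s_\tau \mid \pi)}
         \big[ \ln q(s_\tau \mid \pi) - \ln \tilde{p}(o_\tau, s_\tau) \big]
\approx
  \underbrace{-\,\mathbb{E}_{q(o_\tau \mid \pi)}\big[\ln \tilde{p}(o_\tau)\big]}_{\text{extrinsic value}}
  \;-\;
  \underbrace{\mathbb{E}_{q(o_\tau \mid \pi)}\Big[ D_{\mathrm{KL}}\big(q(s_\tau \mid o_\tau, \pi)\,\|\,q(s_\tau \mid \pi)\big)\Big]}_{\text{epistemic value}}

% The FEEF is instead a divergence between predicted and desired futures:
\mathrm{FEEF}(\pi) = \sum_\tau \mathbb{E}_{q(o_\tau, s_\tau \mid \pi)}
         \big[ \ln q(o_\tau, s_\tau \mid \pi) - \ln \tilde{p}(o_\tau, s_\tau) \big]
```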


Reinforcement Learning as Iterative and Amortised Inference

arXiv.org Artificial Intelligence

There are several ways to categorise reinforcement learning (RL) algorithms, such as either model-based or model-free, policy-based or planning-based, on-policy or off-policy, and online or offline. Broad classification schemes such as these help provide a unified perspective on disparate techniques and can contextualise and guide the development of new algorithms. In this paper, we utilise the control as inference framework to outline a novel classification scheme based on amortised and iterative inference. We demonstrate that a wide range of algorithms can be classified in this manner, providing a fresh perspective and highlighting a range of existing similarities. Moreover, we show that taking this perspective allows us to identify parts of the algorithmic design space which have been relatively unexplored, suggesting new routes to innovative RL algorithms.


On the Relationship Between Active Inference and Control as Inference

arXiv.org Artificial Intelligence

Active Inference (AIF) is an emerging framework in the brain sciences which suggests that biological agents act to minimise a variational bound on model evidence. Control-as-Inference (CAI) is a framework within reinforcement learning which casts decision making as a variational inference problem. While these frameworks both consider action selection through the lens of variational inference, their relationship remains unclear. Here, we provide a formal comparison between them and demonstrate that the primary difference arises from how value is incorporated into their respective generative models. In the context of this comparison, we highlight several ways in which these frameworks can inform one another.
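The difference in how value enters the two generative models can be made concrete. These are the standard textbook formulations (not the paper's own notation): CAI introduces an exogenous binary optimality variable conditioned on states and actions, while AIF absorbs preferences directly into a biased prior over observations:

```latex
% Control-as-Inference: optimality variable \mathcal{O}_t
p(\mathcal{O}_t = 1 \mid s_t, a_t) \propto \exp\big(r(s_t, a_t)\big)

% Active Inference: biased (preference-encoding) prior over observations
\tilde{p}(o_t) \propto \exp\big(r(o_t)\big)
```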