Locally Learned Synaptic Dropout for Complete Bayesian Inference

McKee, Kevin L., Crandell, Ian C., Chaudhuri, Rishidev, O'Reilly, Randall C.

arXiv.org Machine Learning 

The Bayesian brain hypothesis postulates that the brain operates accurately on statistical distributions according to Bayes' theorem. The random failure of presynaptic vesicles to release neurotransmitters may allow the brain to sample from posterior distributions of network parameters, interpreted as epistemic uncertainty. It has not previously been shown how random failures might allow networks to sample from observed distributions, also known as aleatoric or residual uncertainty. Sampling from both distributions enables probabilistic inference, efficient search, and creative or generative problem solving. We demonstrate that, under a population-code-based interpretation of neural activity, both types of distribution can be represented and sampled with synaptic failure alone. First, we define a biologically constrained neural network and sampling scheme based on synaptic failure and lateral inhibition. Within this framework, we derive dropout-based epistemic uncertainty, then prove an analytic mapping from synaptic efficacy to release probability that allows networks to sample from arbitrary, learned distributions represented by a receiving layer. Second, we show that this mapping leads to a local learning rule by which synapses adapt their release probabilities. Our results demonstrate complete Bayesian inference, related to the variational learning method of dropout, in a biologically constrained network using only locally learned synaptic failure rates.

Introduction

The Bayesian Brain hypothesis has led to a number of important insights about neural coding in the brain (Knill and Pouget, 2004; Friston, 2010, 2012; Pouget et al., 2013; Lee and Mumford, 2003) by characterizing neural representation and processing in terms of formal probabilistic inference and sampling.
Furthermore, the introduction of related probabilistic representations and sampling processes in modern deep learning variational models has led to improved performance on a range of different tasks (Zhang et al., 2019; Blei et al., 2017; Kingma and Welling, 2014; Detorakis et al., 2019). The widely-used dropout technique in deep learning can be seen as a form of variational inference and sampling (Srivastava et al., 2014; Gal and Ghahramani, 2016) with direct analogy to the random failure of synapses in the brain. This link has led to biologically-motivated models of variational deep learning that use network weight dropout to simulate synaptic failure (Mostafa and Cauwenberghs, 2018; Wan et al., 2013; Neftci et al., 2016). In this paper, we build on these and other recent findings in machine learning and neurobiology to show how the brain can accurately represent the two primary components of probabilistic inference, distributions of observed data and distributions of unobserved values (such as model parameters), with the single, biologically established mechanism of synaptic failure.
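The analogy between dropout and synaptic failure can be made concrete with a minimal sketch: each weight (synapse) transmits on a given forward pass only with some release probability, and repeated stochastic passes yield a predictive distribution whose spread reflects epistemic uncertainty, as in Monte Carlo dropout (Gal and Ghahramani, 2016) and DropConnect-style weight dropout (Wan et al., 2013). This NumPy sketch is purely illustrative, not the model developed in this paper; the release probability, layer sizes, and (untrained) weights are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_forward(x, W1, W2, p_release=0.8, rng=rng):
    """One forward pass with weight-level dropout as a stand-in for
    synaptic failure: each weight transmits with probability p_release,
    and surviving weights are rescaled to keep the expectation unbiased."""
    mask1 = rng.random(W1.shape) < p_release
    mask2 = rng.random(W2.shape) < p_release
    h = np.tanh(x @ (W1 * mask1) / p_release)
    return h @ (W2 * mask2) / p_release

# Tiny fixed network (weights are illustrative, not trained).
W1 = rng.normal(size=(3, 8))
W2 = rng.normal(size=(8, 1))
x = np.array([0.5, -1.0, 0.25])

# Monte Carlo sampling over failure masks approximates sampling from a
# posterior over weights; the spread estimates epistemic uncertainty.
samples = np.array([stochastic_forward(x, W1, W2) for _ in range(2000)])
mean, std = samples.mean(), samples.std()
print(f"predictive mean {mean:.3f}, epistemic std {std:.3f}")
```

Note that this sketch captures only the epistemic side of the analogy; representing learned observed (aleatoric) distributions through release probabilities is the contribution developed in the remainder of the paper.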