Meier, Karlheinz
Stochasticity from function - why the Bayesian brain may need no noise
Dold, Dominik, Bytschok, Ilja, Kungl, Akos F., Baumbach, Andreas, Breitwieser, Oliver, Senn, Walter, Schemmel, Johannes, Meier, Karlheinz, Petrovici, Mihai A.
An increasing body of evidence suggests that the trial-to-trial variability of spiking activity in the brain is not mere noise, but rather the reflection of a sampling-based encoding scheme for probabilistic computing. Since the precise statistical properties of neural activity are important in this context, many models assume an ad-hoc source of well-behaved, explicit noise, either on the input or on the output side of single neuron dynamics, most often assuming an independent Poisson process in either case. However, these assumptions are somewhat problematic: neighboring neurons tend to share receptive fields, rendering both their input and their output correlated; at the same time, neurons are known to behave largely deterministically, as a function of their membrane potential and conductance. We suggest that spiking neural networks may, in fact, have no need for noise to perform sampling-based Bayesian inference. We study analytically the effect of auto- and cross-correlations in functionally Bayesian spiking networks and demonstrate how their effect translates to synaptic interaction strengths, rendering them controllable through synaptic plasticity. This allows even small ensembles of interconnected deterministic spiking networks to simultaneously and co-dependently shape their output activity through learning, enabling them to perform complex Bayesian computation without any need for noise, which we demonstrate in silico, both in classical simulation and in neuromorphic emulation. These results close a gap between the abstract models and the biology of functionally Bayesian spiking networks, effectively reducing the architectural constraints imposed on physical neural substrates required to perform probabilistic computing, be they biological or artificial.
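The abstract computational target of such networks can be made concrete in a few lines. Below is a minimal sketch (not code from the paper) of Gibbs sampling from a Boltzmann distribution over binary units with logistic update rules, the kind of well-defined target distribution that sampling-based spiking networks approximate. The explicit random number generator used here is precisely the ingredient the paper argues can be replaced by the deterministic activity of other networks; all sizes and weights are illustrative.

```python
# Minimal sketch: Gibbs sampling from a Boltzmann distribution
# p(z) ~ exp(z'Wz/2 + b'z) with logistic conditional updates. The RNG below
# supplies the stochasticity that, per the paper, deterministic network
# activity can substitute for. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 5
W = rng.normal(0, 0.5, (n, n))
W = (W + W.T) / 2            # symmetric couplings
np.fill_diagonal(W, 0.0)     # no self-interaction
b = rng.normal(0, 0.5, n)    # biases

z = rng.integers(0, 2, n).astype(float)
samples = []
for step in range(20000):
    k = step % n                          # sweep over units
    u = W[k] @ z + b[k]                   # abstract "membrane potential"
    z[k] = float(rng.random() < 1.0 / (1.0 + np.exp(-u)))
    samples.append(z.copy())

print("sampled mean activity:", np.mean(samples, axis=0).round(2))
```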
Pattern representation and recognition with accelerated analog neuromorphic systems
Petrovici, Mihai A., Schmitt, Sebastian, Klähn, Johann, Stöckel, David, Schroeder, Anna, Bellec, Guillaume, Bill, Johannes, Breitwieser, Oliver, Bytschok, Ilja, Grübl, Andreas, Güttler, Maurice, Hartel, Andreas, Hartmann, Stephan, Husmann, Dan, Husmann, Kai, Jeltsch, Sebastian, Karasenko, Vitali, Kleider, Mitja, Koke, Christoph, Kononov, Alexander, Mauch, Christian, Müller, Eric, Müller, Paul, Partzsch, Johannes, Pfeil, Thomas, Schiefer, Stefan, Scholze, Stefan, Subramoney, Anand, Thanasoulis, Vasilis, Vogginger, Bernhard, Legenstein, Robert, Maass, Wolfgang, Schüffny, René, Mayr, Christian, Schemmel, Johannes, Meier, Karlheinz
Despite being originally inspired by the central nervous system, artificial neural networks have diverged from their biological archetypes as they have been remodeled to fit particular tasks. In this paper, we review several possibilities to reverse-map these architectures to biologically more realistic spiking networks with the aim of emulating them on fast, low-power neuromorphic hardware. Since many of these devices employ analog components, which cannot be perfectly controlled, finding ways to compensate for the resulting effects represents a key challenge. Here, we discuss three different strategies to address this problem: the addition of auxiliary network components for stabilizing activity, the utilization of inherently robust architectures, and a training method for hardware-emulated networks that functions without perfect knowledge of the system's dynamics and parameters. For all three scenarios, we corroborate our theoretical considerations with experimental results on accelerated analog neuromorphic platforms.
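To convey the spirit of the third strategy, here is a hedged, purely illustrative sketch: weight updates are computed only from observed output activity, while the forward pass runs on a "substrate" that distorts each weight with fixed multiplicative noise unknown to the trainer. The distortion model, toy task, and learning rule are assumptions for illustration, not the paper's method or hardware.

```python
# Illustrative sketch of training without perfect knowledge of the substrate:
# the forward pass applies fixed, unknown multiplicative weight distortions,
# and the update rule uses only the measured activations. Assumed setup.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
y = (X @ np.array([1.0, -2.0, 0.5, 1.5]) > 0).astype(float)  # toy labels

w = np.zeros(4)
distortion = rng.normal(1.0, 0.2, 4)   # fixed-pattern noise, hidden from trainer

for epoch in range(200):
    u = X @ (w * distortion)           # substrate applies distorted weights
    p = 1.0 / (1.0 + np.exp(-u))       # observed output activity
    w -= 0.5 * X.T @ (p - y) / len(y)  # update from observations alone

acc = np.mean((1.0 / (1.0 + np.exp(-X @ (w * distortion))) > 0.5) == y)
print(f"accuracy despite unknown distortions: {acc:.2f}")
```

The update rule never sees the distortions; training still succeeds because the observed activity already reflects the distorted parameters.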
Robustness from structure: Inference with hierarchical spiking networks on analog neuromorphic hardware
Petrovici, Mihai A., Schroeder, Anna, Breitwieser, Oliver, Grübl, Andreas, Schemmel, Johannes, Meier, Karlheinz
How spiking networks are able to perform probabilistic inference is an intriguing question, not only for understanding information processing in the brain, but also for transferring these computational principles to neuromorphic silicon circuits. A number of computationally powerful spiking network models have been proposed, but most of them have only been tested, under ideal conditions, in software simulations. Any implementation in an analog, physical system, be it in vivo or in silico, will generally lead to distorted dynamics due to the physical properties of the underlying substrate. In this paper, we discuss several such distortive effects that are difficult or impossible to remove by classical calibration routines or parameter training. We then argue that hierarchical networks of leaky integrate-and-fire neurons can offer the required robustness for physical implementation and demonstrate this with both software simulations and emulation on an accelerated analog neuromorphic device.
Stochastic inference with spiking neurons in the high-conductance state
Petrovici, Mihai A., Bill, Johannes, Bytschok, Ilja, Schemmel, Johannes, Meier, Karlheinz
The highly variable dynamics of neocortical circuits observed in vivo have been hypothesized to represent a signature of ongoing stochastic inference but stand in apparent contrast to the deterministic response of neurons measured in vitro. Based on a propagation of the membrane autocorrelation across spike bursts, we provide an analytical derivation of the neural activation function that holds for a large parameter space, including the high-conductance state. On this basis, we show how an ensemble of leaky integrate-and-fire neurons with conductance-based synapses embedded in a spiking environment can attain the correct firing statistics for sampling from a well-defined target distribution. For recurrent networks, we examine convergence toward stationarity in computer simulations and demonstrate sample-based Bayesian inference in a mixed graphical model. This points to a new computational role of high-conductance states and establishes a rigorous link between deterministic neuron models and functional stochastic dynamics on the network level.
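As a rough numerical counterpart to the derived activation function, the sketch below estimates an LIF neuron's probability of being refractory (the "active" state in the sampling picture) as a function of its resting potential, using simple Gaussian current noise as a crude stand-in for the conductance-based synaptic bombardment treated analytically in the paper. All parameters are illustrative; the small membrane time constant loosely mimics the high-conductance state.

```python
# Assumed toy setup: LIF neuron with Gaussian current noise; we measure the
# fraction of time spent refractory as a function of the resting potential,
# which should trace out a roughly sigmoidal activation function.
import numpy as np

rng = np.random.default_rng(2)
dt, tau_m, tau_ref = 0.1, 1.0, 10.0    # ms; small tau_m mimics the HCS
theta, v_reset = -50.0, -53.0          # mV: threshold and reset
sigma = 2.0                            # mV/sqrt(ms) noise amplitude

def p_active(v_rest, steps=100000):
    """Fraction of time the neuron spends in the refractory (z=1) state."""
    v, refrac, active = v_reset, 0.0, 0
    for _ in range(steps):
        if refrac > 0:
            refrac -= dt
            active += 1
        else:
            v += (v_rest - v) / tau_m * dt + sigma * np.sqrt(dt) * rng.normal()
            if v >= theta:
                v, refrac = v_reset, tau_ref
    return active / steps

for v_rest in np.linspace(-56, -44, 7):
    print(f"v_rest = {v_rest:5.1f} mV -> p(z=1) ~ {p_active(v_rest):.2f}")
```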
The high-conductance state enables neural sampling in networks of LIF neurons
Petrovici, Mihai A., Bytschok, Ilja, Bill, Johannes, Schemmel, Johannes, Meier, Karlheinz
The apparent stochasticity of in-vivo neural circuits has long been hypothesized to represent a signature of ongoing stochastic inference in the brain. More recently, a theoretical framework for neural sampling has been proposed, which explains how sample-based inference can be performed by networks of spiking neurons. One particular requirement of this approach is that the neural response function closely follows a logistic curve. Analytical approaches to calculating neural response functions have been the subject of many theoretical studies. In order to make the problem tractable, particular assumptions regarding the neural or synaptic parameters are usually made. However, biologically significant activity regimes exist that are not covered by these approaches: under strong synaptic bombardment, as is often the case in cortex, the neuron is shifted into a high-conductance state (HCS) characterized by a small membrane time constant. In this regime, synaptic time constants and refractory periods dominate membrane dynamics. The core idea of our approach is to separately consider two different "modes" of spiking dynamics: burst spiking and transient quiescence, in which the neuron does not spike for extended periods. We treat the former by propagating the probability density function (PDF) of the effective membrane potential from spike to spike within a burst, while using a diffusion approximation for the latter. We find that our prediction of the neural response function closely matches simulation data. Moreover, in the HCS scenario, we show that the neural response function becomes symmetric and can be well approximated by a logistic function, thereby providing the correct dynamics for neural sampling. This yields not only a normative framework for Bayesian inference in cortex, but also powerful applications of low-power, accelerated neuromorphic systems to relevant machine learning tasks.
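For reference, the requirement from the neural sampling framework alluded to above can be stated compactly; the notation below is our own shorthand for the standard formulation (Buesing et al., 2011):

```latex
% Neural sampling condition: given the state z_{\setminus k} of all other
% units, each neuron's probability of being active (refractory) must be a
% logistic function of its abstract membrane potential.
\begin{align}
  p(z_k = 1 \mid z_{\setminus k}) &= \sigma(u_k) = \frac{1}{1 + e^{-u_k}}, &
  u_k &= b_k + \sum_j W_{kj}\, z_j.
\end{align}
```

The result described above is that in the HCS the LIF response function approaches this logistic form up to a shift and a scaling, p(z_k = 1) ≈ σ((ū_k − u_k⁰)/α), which is what licenses the sampling interpretation.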
Stochastic inference with deterministic spiking neurons
Petrovici, Mihai A., Bill, Johannes, Bytschok, Ilja, Schemmel, Johannes, Meier, Karlheinz
The seemingly stochastic transient dynamics of neocortical circuits observed in vivo have been hypothesized to represent a signature of ongoing stochastic inference. In vitro neurons, on the other hand, exhibit a highly deterministic response to various types of stimulation. We show that an ensemble of deterministic leaky integrate-and-fire neurons embedded in a spiking noisy environment can attain the correct firing statistics in order to sample from a well-defined target distribution. We provide an analytical derivation of the activation function on the single cell level; for recurrent networks, we examine convergence towards stationarity in computer simulations and demonstrate sample-based Bayesian inference in a mixed graphical model. This establishes a rigorous link between deterministic neuron models and functional stochastic dynamics on the network level.
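The "sample-based Bayesian inference" step can be made concrete in the abstract model: clamp an observed unit of a small Boltzmann machine to its evidence value and estimate a posterior marginal from the remaining samples, checking against exact enumeration. The sketch below uses illustrative weights, not the mixed graphical model from the paper.

```python
# Hedged sketch: posterior marginal p(z0=1 | z2=1) in a 3-unit Boltzmann
# machine, once by exact enumeration and once by Gibbs sampling with the
# observed unit clamped. Weights are illustrative.
import itertools
import numpy as np

rng = np.random.default_rng(3)
W = np.array([[0.0, 1.2, -0.7],
              [1.2, 0.0, 0.8],
              [-0.7, 0.8, 0.0]])
b = np.array([-0.3, 0.2, 0.1])

def unnorm(z):
    return np.exp(0.5 * z @ W @ z + b @ z)

states = [np.array(s, float) for s in itertools.product([0, 1], repeat=3)]
num = sum(unnorm(s) for s in states if s[2] == 1 and s[0] == 1)
den = sum(unnorm(s) for s in states if s[2] == 1)
print(f"exact   p(z0=1 | z2=1) = {num / den:.3f}")

z, hits, steps = np.array([0.0, 0.0, 1.0]), 0, 50000
for step in range(steps):
    k = step % 2                        # update only the free units 0 and 1
    u = W[k] @ z + b[k]
    z[k] = float(rng.random() < 1.0 / (1.0 + np.exp(-u)))
    hits += z[0]
print(f"sampled p(z0=1 | z2=1) = {hits / steps:.3f}")
```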
A VLSI Implementation of the Adaptive Exponential Integrate-and-Fire Neuron Model
Millner, Sebastian, Grübl, Andreas, Meier, Karlheinz, Schemmel, Johannes, Schwartz, Marc-olivier
We describe an accelerated hardware neuron capable of emulating the adaptive exponential integrate-and-fire neuron model. Firing patterns of the membrane stimulated by a step current are analyzed in transistor-level simulation and in silicon on a prototype chip. The neuron is intended as the hardware neuron of a highly integrated wafer-scale system aimed at new computational paradigms and new experimental possibilities. As the neuron is conceived as a universal device for neuroscientific experiments, the focus lies on parameterizability and on faithful reproduction of the analytical model.
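For readers unfamiliar with the model being emulated, here is a minimal forward-Euler integration of the AdEx equations (Brette and Gerstner, 2005) under a step current, mirroring the firing-pattern analysis described above. Parameter values are illustrative textbook-style numbers, not the chip's.

```python
# Minimal AdEx integration under a step current (assumed parameters):
#   C dV/dt = -g_L (V - E_L) + g_L * Delta_T * exp((V - V_T)/Delta_T) - w + I
#   tau_w dw/dt = a (V - E_L) - w
#   on spike: V -> V_reset, w -> w + b
import numpy as np

C, g_L, E_L = 200.0, 10.0, -70.0       # pF, nS, mV
V_T, Delta_T = -50.0, 2.0              # mV
a, b, tau_w = 2.0, 60.0, 100.0         # nS, pA, ms
V_reset, V_spike = -58.0, 0.0          # mV
dt, I_step = 0.05, 500.0               # ms, pA

V, w, spikes = E_L, 0.0, []
for step in range(int(500 / dt)):      # 500 ms of stimulation
    dV = (-g_L * (V - E_L) + g_L * Delta_T * np.exp((V - V_T) / Delta_T)
          - w + I_step) / C
    dw = (a * (V - E_L) - w) / tau_w
    V += dt * dV
    w += dt * dw
    if V >= V_spike:                   # spike: reset, spike-triggered adaptation
        V, w = V_reset, w + b
        spikes.append(step * dt)

print(f"{len(spikes)} spikes; inter-spike intervals (ms):",
      np.round(np.diff(spikes), 1))
```

With these illustrative values the inter-spike intervals should lengthen over the burst, i.e. the spike-triggered adaptation produces one of the adapting firing patterns probed in the step-current analysis.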
Edge of Chaos Computation in Mixed-Mode VLSI - A Hard Liquid
Schürmann, Felix, Meier, Karlheinz, Schemmel, Johannes
Computation without stable states is a computing paradigm different from Turing's and has been demonstrated for various types of simulated neural networks. This publication transfers the approach to a hardware-implemented neural network. Results of a software implementation are reproduced, showing that performance peaks when the network exhibits dynamics at the edge of chaos. The liquid computing approach seems well suited for operating analog computing devices such as the VLSI neural network used here.
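The liquid-computing setup can be sketched in software with an echo-state-style network: a fixed random recurrent "liquid" driven by an input stream, and a linear readout trained on its transient states. Scaling the recurrent spectral radius moves the dynamics toward or away from the edge of chaos. This is a simplified rate-based analogue under assumed parameters, not the mixed-mode VLSI system.

```python
# Echo-state-style sketch of liquid computing (assumed parameters): only the
# linear readout is trained; the recurrent "liquid" stays fixed. Spectral
# radius near 1 places the dynamics close to the edge of chaos.
import numpy as np

rng = np.random.default_rng(4)
N, T = 100, 2000
u = rng.integers(0, 2, T).astype(float)        # random input bit stream
target = np.roll(u, 3)                         # task: recall the input 3 steps back

W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius 0.9
W_in = rng.normal(0, 1, N)

x, states = np.zeros(N), []
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])           # liquid state update
    states.append(x.copy())
X = np.array(states)

w_out = np.linalg.solve(X.T @ X + 1e-3 * np.eye(N), X.T @ target)  # ridge readout
pred = (X @ w_out > 0.5).astype(float)
print("memory-task accuracy:", np.mean(pred[10:] == target[10:]))
```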