pyramidal cell
- Europe > Switzerland > Zürich > Zürich (0.14)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.14)
- Europe > Switzerland > Bern > Bern (0.04)
- (4 more...)
- North America > United States (0.14)
- Asia > Middle East > Jordan (0.04)
- Health & Medicine > Therapeutic Area > Neurology (0.94)
- Education (0.68)
Review for NeurIPS paper: A simple normative network approximates local non-Hebbian learning in the cortex
Weaknesses: The empirical evaluation is one of the weakest aspects of the paper. The fact that this is done on only one, seemingly arbitrarily chosen, dataset diminishes the significance of the results. I would have liked to see evaluation on more standard datasets. Some aspects of the biological mapping may not be biologically plausible:
- Only linear models are considered; in biology, pyramidal cells are known to have many non-linear effects.
A Determinantal Point Process Latent Variable Model for Inhibition in Neural Spiking Data
Point processes are popular models of neural spiking behavior as they provide a statistical distribution over temporal sequences of spikes and help to reveal the complexities underlying a series of recorded action potentials. However, the most common neural point process models, the Poisson process and the gamma renewal process, do not capture interactions and correlations that are critical to modeling populations of neurons. We develop a novel model based on a determinantal point process over latent embeddings of neurons that effectively captures and helps visualize complex inhibitory and competitive interactions. We show that this model is a natural extension of the popular generalized linear model to sets of interacting neurons. The model is extended to incorporate gain control or divisive normalization, and the modulation of neural spiking based on periodic phenomena. Applied to neural spike recordings from the rat hippocampus, we see that the model captures inhibitory relationships, a dichotomy of classes of neurons, and a periodic modulation by the theta rhythm known to be present in the data.
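As an illustration of the core likelihood, here is a minimal sketch, assuming an L-ensemble parameterization with a Gram-matrix kernel over made-up latent embeddings; the embeddings, kernel choice, and function names are illustrative, not the authors' exact model (which extends a GLM and adds gain control and periodic modulation).

```python
import numpy as np

def dpp_log_likelihood(L, active):
    # L-ensemble DPP: P(A) = det(L_A) / det(L + I), where L_A is the
    # submatrix of the kernel L indexed by the active (spiking) neurons.
    L_A = L[np.ix_(active, active)]
    _, logdet_A = np.linalg.slogdet(L_A)
    _, logdet_Z = np.linalg.slogdet(L + np.eye(L.shape[0]))
    return logdet_A - logdet_Z

# Made-up latent embeddings: neurons 0 and 1 are nearly parallel in the
# latent space, so the DPP treats their joint firing as unlikely --
# exactly the inhibitory/competitive structure the model captures.
emb = np.array([[1.0, 0.0],
                [0.9, 0.1],
                [0.0, 1.0]])
L = emb @ emb.T                       # positive semidefinite kernel

print(dpp_log_likelihood(L, [0, 1]))  # similar pair: strongly suppressed
print(dpp_log_likelihood(L, [0, 2]))  # dissimilar pair: much more likely
```

The determinant penalizes sets of neurons with similar embeddings, which is what gives the model its repulsive, inhibition-like character.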
Learning efficient backprojections across cortical hierarchies in real time
Max, Kevin, Kriener, Laura, García, Garibaldi Pineda, Nowotny, Thomas, Senn, Walter, Petrovici, Mihai A.
Models of sensory processing and learning in the cortex need to efficiently assign credit to synapses in all areas. In deep learning, a known solution is error backpropagation, which however requires biologically implausible weight transport from feed-forward to feedback paths. We introduce Phaseless Alignment Learning (PAL), a bio-plausible method to learn efficient feedback weights in layered cortical hierarchies. This is achieved by exploiting the noise naturally found in biophysical systems as an additional carrier of information. In our dynamical system, all weights are learned simultaneously with always-on plasticity and using only information locally available to the synapses. Our method is completely phase-free (no forward and backward passes or phased learning) and allows for efficient error propagation across multi-layer cortical hierarchies, while maintaining biologically plausible signal transport and learning. Our method is applicable to a wide class of models and improves on previously known biologically plausible ways of credit assignment: compared to random synaptic feedback, it can solve complex tasks with fewer neurons and learn more useful latent representations. We demonstrate this on various classification tasks using a cortical microcircuit model with prospective coding.
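A toy sketch of the central idea — noise as an extra information carrier for learning feedback weights — under strong simplifying assumptions: a single linear layer, Gaussian noise, and a plain regression rule. The variable names and the exact update are illustrative, not the paper's dynamical system.

```python
import numpy as np

rng = np.random.default_rng(1)
n_low, n_high = 8, 4
W = rng.normal(size=(n_high, n_low))   # forward weights (here held fixed)
B = np.zeros((n_low, n_high))          # feedback weights, learned from noise alone
eta = 0.01

for _ in range(20000):
    xi = rng.normal(size=n_low)        # noise injected in the lower layer...
    y = W @ xi                         # ...propagates up the forward path...
    fb = B @ y                         # ...and echoes back down the feedback path
    B += eta * np.outer(xi - fb, y)    # local rule: regress the noise onto its echo

# B converges toward the pseudo-inverse of W, i.e. a useful feedback
# path was learned without any weight transport from W.
print(np.linalg.norm(B - np.linalg.pinv(W)))  # small after training
```

The same principle — correlating locally available noise with its feedback echo — is what lets PAL learn feedback weights simultaneously with forward weights, without separate phases.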
- North America > United States (0.28)
- Europe > United Kingdom (0.28)
- Energy > Oil & Gas (1.00)
- Education (0.93)
- Health & Medicine > Therapeutic Area > Neurology (0.47)
The Convergence of AI code and Cortical Functioning -- a Commentary
Neural nets, one of the oldest architectures for AI programming, are loosely based on biological neurons and their properties. Recent work on language applications has made the AI code closer to biological reality in several ways. This commentary examines this convergence and, in light of what is known of neocortical structure, addresses the question of whether "general AI" looks attainable with these tools. One of the earliest ideas for programming Artificial Intelligence was to imitate neurons and their connectivity with neural nets. In the turbulent boom-and-bust evolution of AI, this remained a theme with strong adherents, but it fell out of the mainstream until around 2010, when these ideas were implemented with really huge datasets and really fast computers. The field of AI has now had a decade of tremendous progress in which neural nets, along with some major improvements, have been the central character. The purpose of this post is to describe the further parallels between the software implementation of AI and the instantiation of cognitive intelligence in mammalian brains. I conjecture that, for better or for worse, all future instances of artificial intelligence will be driven to use these algorithms even though they are opaque and resist simple explanations of why they do what they do. This commentary follows my blog post here. Such a function is always diagrammed as a set of layers, with functions φ computing the next higher layer from the layer below, and the running value after each composition is called the "activity" x. The components of these activities are called "units", as these are supposed to correspond to neurons in the biological interpretation.
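To make that last point concrete, a minimal sketch of the composition of layers; the choice of a ReLU for φ and the layer widths are illustrative.

```python
import numpy as np

def phi(W, x):
    # One layer: linear map followed by a simple nonlinearity (ReLU).
    return np.maximum(0.0, W @ x)

rng = np.random.default_rng(0)
widths = [4, 8, 8, 3]                  # number of "units" per layer
Ws = [rng.normal(size=(m, n)) for n, m in zip(widths[:-1], widths[1:])]

x = rng.normal(size=widths[0])         # input activity
for W in Ws:
    x = phi(W, x)                      # the "activity" after each composition
print(x)                               # its components are the units
```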
Dendritic cortical microcircuits approximate the backpropagation algorithm
Sacramento, João, Costa, Rui Ponte, Bengio, Yoshua, Senn, Walter
Deep learning has seen remarkable developments over the last years, many of them inspired by neuroscience. However, the main learning mechanism behind these advances – error backpropagation – appears to be at odds with neurobiology. Here, we introduce a multilayer neuronal network model with simplified dendritic compartments in which error-driven synaptic plasticity adapts the network towards a global desired output. In contrast to previous work our model does not require separate phases and synaptic learning is driven by local dendritic prediction errors continuously in time. Such errors originate at apical dendrites and occur due to a mismatch between predictive input from lateral interneurons and activity from actual top-down feedback. Through the use of simple dendritic compartments and different cell-types our model can represent both error and normal activity within a pyramidal neuron. We demonstrate the learning capabilities of the model in regression and classification tasks, and show analytically that it approximates the error backpropagation algorithm. Moreover, our framework is consistent with recent observations of learning between brain areas and the architecture of cortical microcircuits. Overall, we introduce a novel view of learning on dendritic cortical circuits and on how the brain may solve the long-standing synaptic credit assignment problem.
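A deliberately minimal caricature of the plasticity rule, assuming a single pyramidal cell, discrete time steps, and a lateral interneuron prediction collapsed into a fixed unit gain; the actual model also learns the interneuron weights and runs as a continuous-time dynamical system.

```python
import numpy as np

rng = np.random.default_rng(2)
n_in = 10
w_basal = 0.1 * rng.normal(size=n_in)  # bottom-up (basal) synapses, plastic
eta = 0.05

for _ in range(500):
    x = rng.normal(size=n_in)          # bottom-up input
    basal = w_basal @ x                # somatic activity driven by basal input
    top_down = 0.5 * x.sum()           # stand-in for the top-down feedback signal
    lateral = basal                    # interneuron prediction (fixed unit gain)
    apical = top_down - lateral        # local dendritic prediction error
    w_basal += eta * apical * x        # error-driven plasticity, purely local

# Once the lateral prediction matches the feedback, the apical error
# vanishes and learning stops -- no global error signal or phases needed.
print(abs(apical))                     # small after training
```

Stacking such neurons, with interneurons learning to cancel the top-down feedback, is the route by which the full model approximates backpropagation.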
- Europe > Switzerland > Zürich > Zürich (0.14)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.14)
- Europe > Switzerland > Bern > Bern (0.04)
- (4 more...)