The Bayesian sampling in a canonical recurrent circuit with a diversity of inhibitory interneurons
Accumulating evidence suggests that stochastic cortical circuits can perform sampling-based Bayesian inference to compute the posterior over latent stimuli. Canonical cortical circuits consist of excitatory (E) neurons and multiple types of inhibitory (I) interneurons. Nevertheless, few sampling-based neural circuit models consider the diversity of interneurons, so how interneurons contribute to sampling remains poorly understood. To provide theoretical insight, we build a nonlinear canonical circuit model consisting of recurrently connected E neurons and two types of I neurons: parvalbumin (PV) and somatostatin (SOM) neurons. The E neurons are modeled as a canonical ring (attractor) model, receiving global inhibition from PV neurons and local, tuning-dependent inhibition from SOM neurons. We theoretically analyze the nonlinear circuit dynamics and analytically identify the Bayesian sampling algorithm that the dynamics implement. We find that a reduced circuit containing only E and PV neurons performs Langevin sampling, and that including SOM neurons with tuning-dependent inhibition speeds up sampling by upgrading Langevin sampling to Hamiltonian sampling. Moreover, the Hamiltonian framework requires that SOM neurons receive no direct feedforward connections, consistent with neuroanatomy. Our work provides overarching connections between nonlinear circuits with various types of interneurons and sampling algorithms, deepening our understanding of the circuit implementation of Bayesian inference.
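The Langevin-versus-Hamiltonian distinction in the abstract can be illustrated with a generic one-dimensional sketch. This is not the paper's circuit model: the standard-Gaussian target, the step size `dt`, the friction `gamma`, and the auxiliary momentum variable (standing in for the role the abstract attributes to SOM-mediated inhibition) are all illustrative assumptions; both samplers should reach the same stationary distribution, with the momentum-augmented one mixing faster.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative target posterior: standard 1-D Gaussian, so grad log p(x) = -x.
def grad_logp(x):
    return -x

dt, n_steps = 0.1, 5000

# Langevin sampling (analogous to the reduced E + PV circuit in the abstract).
x = 0.0
langevin = np.empty(n_steps)
for t in range(n_steps):
    x += dt * grad_logp(x) + np.sqrt(2 * dt) * rng.standard_normal()
    langevin[t] = x

# Underdamped (Hamiltonian-like) sampling: an auxiliary momentum p
# adds inertia, so successive samples are less correlated.
x, p, gamma = 0.0, 0.0, 0.5
hamiltonian = np.empty(n_steps)
for t in range(n_steps):
    p += dt * (grad_logp(x) - gamma * p) + np.sqrt(2 * gamma * dt) * rng.standard_normal()
    x += dt * p
    hamiltonian[t] = x

# Both chains sample the same target: sample std close to 1.
print(np.std(langevin), np.std(hamiltonian))
```

Comparing the autocorrelation time of the two chains (e.g., the lag at which the sample autocorrelation drops below 1/e) is the usual way to quantify the speed-up the abstract describes.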
Shaping the distribution of neural responses with interneurons in a recurrent circuit model
Efficient coding theory posits that sensory circuits transform natural signals into neural representations that maximize information transmission subject to resource constraints. Local interneurons are thought to play an important role in these transformations, shaping patterns of circuit activity to facilitate and direct information flow. However, the relationship between these coordinated, nonlinear, circuit-level transformations and the properties of interneurons (e.g., connectivity, activation functions) remains unknown. Here, we propose a normative computational model that establishes such a relationship. Our model is derived from an optimal transport objective that conceptualizes the circuit's input-response function as transforming the inputs to achieve a target response distribution. The circuit, which comprises primary neurons recurrently connected to a set of local interneurons, continuously optimizes this objective by dynamically adjusting both the synaptic connections between neurons and the interneuron activation functions. In an application motivated by redundancy reduction theory, we demonstrate that when the inputs are drawn from natural image statistics and the target distribution is a spherical Gaussian, the circuit learns a nonlinear transformation that significantly reduces statistical dependencies in the neural responses. Overall, our results provide a framework in which the distribution of circuit responses is systematically and nonlinearly controlled by adjusting interneuron connectivity and activation functions.
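The "transform inputs toward a spherical Gaussian target" idea has a simple closed-form special case that can serve as a sanity check: ZCA whitening, which removes second-order dependencies. This is a stand-in, not the paper's learned circuit (which is adaptive and nonlinear); the correlated-Gaussian inputs below are a toy substitute for natural image statistics.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for natural-signal statistics: correlated Gaussian inputs.
n, d = 10000, 3
A = rng.standard_normal((d, d))
X = rng.standard_normal((n, d)) @ A.T           # rows are input samples

# Target response distribution: spherical (identity-covariance) Gaussian.
# ZCA whitening, r = C^{-1/2} x, is the linear map that achieves the
# second-order part of this target.
C = np.cov(X, rowvar=False)
evals, evecs = np.linalg.eigh(C)
W = evecs @ np.diag(evals ** -0.5) @ evecs.T    # symmetric C^{-1/2}
R = X @ W.T                                     # circuit "responses"

print(np.round(np.cov(R, rowvar=False), 2))     # ≈ identity matrix
```

The circuit model in the abstract goes beyond this sketch by learning the transformation online and nonlinearly, so it can also reduce higher-order dependencies that no linear whitening can remove.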
Supplementary Information

10 Relation between low-pass filter and lookahead

In general, the prospective (or lookahead) voltage u

Eqn. 3 represents the solution for a stationary energy with respect to the prospective voltage. To include synaptic filtering in our theory, we introduce an additional LPF as in Eqn. 10 with time

The target signal for the top-layer pyramidal neurons is determined by the training set. We include LE in the dendritic microcircuit by two simple modifications. Learning is split into two stages: first, learning the so-called self-predicting state, and afterwards learning the actual task. The full set of parameters used in Figure 1 can be found in Section 15.2; Table 1 lists all the parameters we used for the experiments shown in Figure 1.
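The lookahead/low-pass-filter relation referenced above can be checked numerically: if the prospective voltage is ū(t) = u(t) + τ du/dt, then passing ū through a first-order low-pass filter with the same time constant τ recovers u. The sketch below assumes this standard first-order form; the sinusoidal trace and the values of τ and dt are illustrative, not the parameters from the supplementary material.

```python
import numpy as np

dt, tau = 0.001, 0.05
t = np.arange(0, 1, dt)
u = np.sin(2 * np.pi * 3 * t)            # illustrative voltage trace
udot = np.gradient(u, dt)
u_prospective = u + tau * udot           # lookahead voltage ū = u + τ du/dt

# First-order low-pass filter with the same τ: τ dv/dt = ū - v.
v = np.zeros_like(u)
for i in range(1, len(t)):
    v[i] = v[i - 1] + dt / tau * (u_prospective[i - 1] - v[i - 1])

# The filtered lookahead signal tracks u up to discretization error.
print(np.max(np.abs(v[200:] - u[200:])))
```

In the frequency domain this is immediate: the filter's transfer function 1/(1 + τs) exactly cancels the lookahead factor (1 + τs), so the composition is the identity.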