Supplementary Material: Demixed shared component analysis of neural population data from multiple brain areas

Neural Information Processing Systems

We generated sequences of neural population activity in areas X and Y. For each combination of conditions, we generated 20 trials, resulting in 300 trials in total. Neurons in areas X and Y were affected by the stimulus and decision, and communicated with each other as follows. Neurons in area X passed stimulus-related information to neurons in area Y via a random projection matrix, two time steps after the neurons in area X began their stimulus-related computation. After area Y received the stimulus-related input from area X, neurons in area Y began to compute the decision.
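The generative recipe described above can be sketched in a few lines of NumPy for a single trial. All sizes, noise levels, and tuning vectors here are illustrative placeholders, not the paper's actual settings:

```python
import numpy as np

rng = np.random.default_rng(0)
n_x, n_y, T, delay = 30, 30, 20, 2  # hypothetical sizes, not the paper's settings

stimulus = rng.choice([-1.0, 1.0])                  # one trial's stimulus value
W_xy = rng.normal(size=(n_y, n_x)) / np.sqrt(n_x)   # random projection from X to Y
coding_x = rng.normal(size=n_x)                     # stimulus tuning of area X neurons

x = np.zeros((T, n_x))
y = np.zeros((T, n_y))
for t in range(T):
    # Area X encodes the stimulus from time 0.
    x[t] = stimulus * coding_x + 0.1 * rng.normal(size=n_x)
    # Area Y receives X's stimulus signal two time steps later.
    if t >= delay:
        y[t] = W_xy @ x[t - delay] + 0.1 * rng.normal(size=n_y)
```

Repeating this loop over condition combinations and trials would yield the full simulated dataset.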



Reviews: Deep Random Splines for Point Process Intensity Estimation of Neural Population Data

Neural Information Processing Systems

This paper proposes a class of random functions where each member is a spline function with parameters produced by a neural network from Gaussian noise. The first contribution of the paper is the ability to enforce non-negativity constraints on the splines via the alternating projection method applied to the output of the neural network. The proposed spline functions are non-negative and smooth, so they are good candidates for modeling the intensity functions of temporal point processes. The second contribution of the paper is thus to use smooth non-negative splines to model temporal point processes, which makes less restrictive structural assumptions about the parametric form of the intensity function. Exploring new expressive processes is one of the important problems in the domain of point processes, and this paper advances knowledge in this area.



Demixed shared component analysis of neural population data from multiple brain areas

Neural Information Processing Systems

Recent advances in neuroscience data acquisition allow for the simultaneous recording of large populations of neurons across multiple brain areas while subjects perform complex cognitive tasks. Interpreting these data requires us to index how task-relevant information is shared across brain regions, but this is often confounded by the mixing of different task parameters at the single neuron level. Here, inspired by a method developed for a single brain area, we introduce a new technique for demixing variables across multiple brain areas, called demixed shared component analysis (dSCA). This yields interpretable components that express which variables are shared between different brain regions and when this information is shared across time. To illustrate our method, we reanalyze two datasets recorded during decision-making tasks in rodents and macaques.
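dSCA itself is specified in the paper, but its core intuition, finding low-rank structure that one area's activity shares with another's, can be illustrated with a toy reduced-rank-regression sketch. All names, sizes, and noise levels below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_x, n_y = 200, 20, 15  # hypothetical population sizes

# Toy data: a single shared latent drives both areas, plus private noise.
latent = rng.normal(size=(n_trials, 1))
X = latent @ rng.normal(size=(1, n_x)) + 0.5 * rng.normal(size=(n_trials, n_x))
Y = latent @ rng.normal(size=(1, n_y)) + 0.5 * rng.normal(size=(n_trials, n_y))

# Least-squares map from area X to area Y; its SVD exposes the shared subspace.
B, *_ = np.linalg.lstsq(X, Y, rcond=None)
U, s, Vt = np.linalg.svd(B)
shared_x = X @ U[:, 0]  # dominant shared component, seen from area X
```

Because only one latent is shared, the singular values of `B` drop sharply after the first, and `U[:, 0]` recovers the shared direction; dSCA additionally demixes such components by task variable.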



Deep Random Splines for Point Process Intensity Estimation of Neural Population Data

Loaiza-Ganem, Gabriel, Perkins, Sean, Schroeder, Karen, Churchland, Mark, Cunningham, John P.

Neural Information Processing Systems

Gaussian processes are the leading class of distributions on random functions, but they suffer from well-known issues including difficulty scaling and inflexibility with respect to certain shape constraints (such as nonnegativity). Here we propose Deep Random Splines, a flexible class of random functions obtained by transforming Gaussian noise through a deep neural network whose outputs are the parameters of a spline. Unlike Gaussian processes, Deep Random Splines allow us to readily enforce shape constraints while inheriting the richness and tractability of deep generative models. We also present an observational model for point process data which uses Deep Random Splines to model the intensity function of each point process and apply it to neural population data to obtain a low-dimensional representation of spiking activity. Inference is performed via a variational autoencoder that uses a novel recurrent encoder architecture that can handle multiple point processes as input.
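As a rough sketch of the construction (not the paper's implementation, which uses higher-order splines and alternating projections), one can map Gaussian noise through a small network to the knot values of a piecewise-linear spline and project them to enforce nonnegativity. The network weights here are random placeholders standing in for a trained model:

```python
import numpy as np

rng = np.random.default_rng(2)
dim_z, n_knots = 4, 10
knots = np.linspace(0.0, 1.0, n_knots)

# Hypothetical tiny "decoder": random weights stand in for a trained network.
W1, b1 = rng.normal(size=(16, dim_z)), rng.normal(size=16)
W2, b2 = rng.normal(size=(n_knots, 16)), rng.normal(size=n_knots)

def sample_intensity(z):
    h = np.tanh(W1 @ z + b1)               # hidden layer
    values = np.maximum(W2 @ h + b2, 0.0)  # project knot values onto the nonnegative set
    # A piecewise-linear spline with nonnegative knot values is nonnegative everywhere.
    return lambda t: np.interp(t, knots, values)

lam = sample_intensity(rng.normal(size=dim_z))  # one draw of a random intensity function
```

Different draws of the Gaussian input `z` yield different intensity functions, which is what makes the spline random.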


Spectral learning of linear dynamics from generalised-linear observations with application to neural population data

Buesing, Lars, Macke, Jakob H., Sahani, Maneesh

Neural Information Processing Systems

Latent linear dynamical systems with generalised-linear observation models arise in a variety of applications, for example when modelling the spiking activity of populations of neurons. Here, we show how spectral learning methods for linear systems with Gaussian observations (usually called subspace identification in this context) can be extended to estimate the parameters of dynamical system models observed through non-Gaussian noise models. We use this approach to obtain estimates of parameters for a dynamical model of neural population data, where the observed spike-counts are Poisson-distributed with log-rates determined by the latent dynamical process, possibly driven by external inputs. We show that the extended system identification algorithm is consistent and accurately recovers the correct parameters on large simulated data sets with much smaller computational cost than approximate expectation-maximisation (EM) due to the non-iterative nature of subspace identification. Even on smaller data sets, it provides an effective initialization for EM, leading to more robust performance and faster convergence.
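For the Gaussian case that the paper extends, the subspace-identification idea can be illustrated by forming a Hankel matrix of cross-covariances between stacked future and past observations and reading the state dimension off its singular values. This is a toy sketch of the classical construction, not the paper's non-Gaussian extension, and all sizes and parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n_latent, n_obs, T = 2, 5, 20000

A = np.array([[0.9, 0.1], [-0.1, 0.9]])  # stable latent dynamics
C = rng.normal(size=(n_obs, n_latent))   # observation matrix

x = np.zeros(n_latent)
ys = np.empty((T, n_obs))
for t in range(T):
    x = A @ x + rng.normal(size=n_latent)        # latent state update with process noise
    ys[t] = C @ x + 0.1 * rng.normal(size=n_obs)  # Gaussian observations

# Cross-covariance Hankel matrix between stacked future and past observations.
k = 3
future = np.hstack([ys[k + i : T - k + i] for i in range(k)])
past = np.hstack([ys[i : T - 2 * k + i] for i in range(k)])
H = future.T @ past / (T - 2 * k)
s = np.linalg.svd(H, compute_uv=False)
# The singular values drop sharply after n_latent, revealing the state dimension;
# the leading singular vectors yield estimates of A and C up to a similarity transform.
```

The method is non-iterative, a single SVD of `H`, which is the source of the computational advantage over EM noted in the abstract.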