A new inference approach for training shallow and deep generalized linear models of noisy interacting neurons

Neural Information Processing Systems

Generalized linear models are one of the most efficient paradigms for predicting the correlated stochastic activity of neuronal networks in response to external stimuli, with applications in many brain areas. However, when dealing with complex stimuli, the inferred coupling parameters often do not generalize across different stimulus statistics, leading to degraded performance and blow-up instabilities. Here, we develop a two-step inference strategy that allows us to train robust generalized linear models of interacting neurons, by explicitly separating the effects of correlations in the stimulus from network interactions in each training step. Applying this approach to the responses of retinal ganglion cells to complex visual stimuli, we show that, compared to classical methods, the models trained in this way exhibit improved performance, are more stable, yield robust interaction networks, and generalize well across complex visual statistics. The method can be extended to deep convolutional neural networks, leading to models with high predictive accuracy for both the neuron firing rates and their correlations.
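The two-step idea in the abstract can be sketched in code. The paper's exact estimator is not reproduced here; what follows is a minimal illustration under simplifying assumptions (a Poisson GLM with one-bin coupling lag, the stimulus term taken as the log of the trial-averaged rate, and synthetic data; names such as h_stim and w are illustrative, not the authors'):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for recorded responses: n_repeats trials of the same
# stimulus, T time bins, N neurons. Sizes and parameters are illustrative.
n_repeats, T, N = 20, 200, 4
true_h = rng.normal(-1.0, 0.5, size=(T, N))       # stimulus-driven log-rate
true_w = 0.2 * rng.standard_normal((N, N))        # one-bin-lag couplings
np.fill_diagonal(true_w, 0.0)

spikes = np.zeros((n_repeats, T, N))
for r in range(n_repeats):
    prev = np.zeros(N)
    for t in range(T):
        rate = np.exp(true_h[t] + prev @ true_w)
        spikes[r, t] = rng.poisson(rate)
        prev = spikes[r, t]

# Step 1: estimate the stimulus term from the trial-averaged response (PSTH),
# so stimulus correlations are absorbed before any couplings are fit.
psth = spikes.mean(axis=0)                        # (T, N)
h_stim = np.log(np.clip(psth, 1e-3, None))        # crude log-rate estimate

# Step 2: freeze h_stim and fit the couplings w by gradient ascent on the
# single-trial Poisson log-likelihood; only network interactions remain.
w = np.zeros((N, N))
lr = 1e-2
for _ in range(500):
    grad = np.zeros_like(w)
    for r in range(n_repeats):
        prev = np.vstack([np.zeros(N), spikes[r, :-1]])   # lagged spikes, (T, N)
        rate = np.exp(h_stim + prev @ w)
        grad += prev.T @ (spikes[r] - rate)               # d(logL)/dw
    w += lr * grad / (n_repeats * T)
    np.fill_diagonal(w, 0.0)                              # no self-coupling here

print("coupling recovery corr:",
      np.corrcoef(true_w.ravel(), w.ravel())[0, 1])
```

Because h_stim is frozen in step 2, the coupling fit cannot compensate for stimulus structure by inflating interactions, which is one way to read the abstract's claim of avoiding blow-up instabilities.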


Review for NeurIPS paper: A new inference approach for training shallow and deep generalized linear models of noisy interacting neurons

Neural Information Processing Systems

Additional Feedback: - The authors claim that empirically they do not need large amounts of repeated stimuli for the method to work. This empirical claim is based on only a single experimental dataset. It would be nice to see some theoretical analysis or exploration into how much data is needed for this to work -- presumably if my data has only 2 repeats of a stimulus then the h_stim auxiliary variable could be very poorly estimated. This introduces a bias into the results of the model, but how bad is this bias? Is this correction procedure provably optimal in some way?


Review for NeurIPS paper: A new inference approach for training shallow and deep generalized linear models of noisy interacting neurons

Neural Information Processing Systems

This paper presents a novel methodology to fit generalized linear models to neural data, overcoming the various limitations of existing models which are prevalent in the literature. The paper received 4 thoughtful and thorough reviews. There was significant discussion following the author response. One reviewer found that the authors did not provide significant intuition or evidence why their method prevents "runaway excitation". However, the other three reviewers believed this was secondary and argued strongly for acceptance, considering this a significant advance in GLM modeling of neural data.
