
aa1f5f73327ba40d47ebce155e785aaf-AuthorFeedback.pdf

Neural Information Processing Systems

We would like to thank all the reviewers for their thoughtful comments and their enthusiasm for our work. These results are consistent with those of Zoltowski et al. [2020], where they found Laplace EM compared … (Section 3). Segmenting the continuous latent states for each population (which is equivalent to imposing hard constraints …). On top of that, the "sticky" parameterization of discrete state transitions reveals which neural populations …. C. elegans offers an illustrative demonstration of the mp-srSLDS. For example, we explore interactions between ganglia in Appendix C. Thanks again for spending the time to provide valuable feedback on our work.



'They don't just fall out of trees': Nobel awards highlight Britain's AI pedigree

The Guardian

It was more than even the most ardent advocates expected. After all the demonstrations of superhuman prowess, and the debates over whether the technology was humanity's best invention yet or its surest route to self-destruction, artificial intelligence landed a Nobel prize this week. And then it landed another. First came the physics prize. The American John Hopfield and the British-Canadian Geoffrey Hinton won for foundational work on artificial neural networks, the computational architecture that underpins modern AI such as ChatGPT.


Unsupervised representation learning with recognition-parametrised probabilistic models

Walker, William I., Soulat, Hugo, Yu, Changmin, Sahani, Maneesh

arXiv.org Artificial Intelligence

We introduce a new approach to probabilistic unsupervised learning based on the recognition-parametrised model (RPM): a normalised semi-parametric hypothesis class for joint distributions over observed and latent variables. Under the key assumption that observations are conditionally independent given latents, the RPM combines parametric prior and observation-conditioned latent distributions with non-parametric observation marginals. This approach leads to a flexible learnt recognition model capturing latent dependence between observations, without the need for an explicit, parametric generative model. The RPM admits exact maximum-likelihood learning for discrete latents, even for powerful neural-network-based recognition. We develop effective approximations applicable in the continuous-latent case. Experiments demonstrate the effectiveness of the RPM on high-dimensional data, learning image classification from weak indirect supervision; direct image-level latent Dirichlet allocation; and recognition-parametrised Gaussian process factor analysis (RP-GPFA) applied to multi-factorial spatiotemporal datasets. The RPM provides a powerful framework to discover meaningful latent structure underlying observational data, a function critical to both animal and artificial intelligence.
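The abstract's key construction, a normalised joint built from observation-conditioned recognition factors and empirical observation marginals, admits exact likelihood evaluation when the latent is discrete. The following is a minimal sketch of that evaluation on toy two-view data, with simple linear-softmax recognition factors standing in for the neural-network recognisers; the toy data, parameter names, and factor forms are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: two conditionally independent "views" x1, x2 of N items,
# each generated from one of K = 2 shared latent classes.
N, K = 300, 2
z_true = rng.integers(K, size=N)
x1 = z_true * 2.0 + 0.4 * rng.standard_normal(N)
x2 = -z_true * 1.5 + 0.4 * rng.standard_normal(N)

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def rpm_loglik(params, x1, x2):
    """Exact RPM log-likelihood for a discrete latent z.

    Each recognition factor f_j(z | x_j) is a linear-softmax classifier
    (an illustrative stand-in for a neural recogniser); F_j(z) is its
    average over the observed x_j, so the joint
        p(z) * prod_j f_j(z | x_j) / F_j(z) * phat_j(x_j)
    is normalised.  Returns log p(x1, x2) summed over the data, up to
    the constant empirical-marginal terms phat_j(x_j).
    """
    w1, b1, w2, b2, logits_pz = params
    f1 = softmax(np.outer(x1, w1) + b1)   # (N, K) recognition factor, view 1
    f2 = softmax(np.outer(x2, w2) + b2)   # (N, K) recognition factor, view 2
    F1 = f1.mean(axis=0)                  # (K,) empirical normaliser
    F2 = f2.mean(axis=0)
    pz = softmax(logits_pz)               # (K,) parametric prior
    inner = pz * (f1 / F1) * (f2 / F2)    # (N, K) unmarginalised joint
    return np.log(inner.sum(axis=1)).sum()

# Recognisers whose outputs track the shared latent score higher than
# uninformative ones, since the two views covary only through z.
ll = rpm_loglik((np.array([0.0, 2.0]), np.zeros(K),
                 np.array([0.0, -2.0]), np.zeros(K), np.zeros(K)), x1, x2)
```

With uninformative recognisers (all parameters zero) every ratio f_j/F_j is 1 and the log-likelihood is exactly zero, which gives a convenient baseline against which latent dependence between views registers as positive log-likelihood.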


On the Separation of Signals from Neighboring Cells in Tetrode Recordings

Sahani, Maneesh, Pezaris, John S., Andersen, Richard A.

Neural Information Processing Systems

We discuss a solution to the problem of separating waveforms produced by multiple cells in an extracellular neural recording. We take an explicitly probabilistic approach, using latent-variable models of varying sophistication to describe the distribution of waveforms produced by a single cell. The models range from a single Gaussian distribution of waveforms for each cell to a mixture of hidden Markov models. We stress the overall statistical structure of the approach, allowing the details of the generative model chosen to depend on the specific neural preparation.
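The simplest model in the range described, a single Gaussian distribution of waveforms for each cell, amounts to fitting a Gaussian mixture over waveform features and assigning each spike to the most responsible cell. Below is a minimal EM sketch on synthetic four-channel amplitude features; the data, dimensions, and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic waveform features: two cells, each producing a feature
# vector (e.g. peak amplitude on each of 4 tetrode channels) drawn
# from its own Gaussian, as in the single-Gaussian-per-cell model.
n_per_cell = 200
mu_true = np.array([[4.0, 1.0, 0.5, 0.2],
                    [0.5, 3.5, 2.0, 0.1]])
X = np.vstack([mu_true[k] + 0.3 * rng.standard_normal((n_per_cell, 4))
               for k in range(2)])

def fit_gmm(X, k, n_iter=50):
    """Minimal EM for a full-covariance Gaussian mixture."""
    n, d = X.shape
    mu = X[np.linspace(0, n - 1, k).astype(int)].copy()  # spread-out init
    cov = np.stack([np.eye(d)] * k)
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities r[i, j] = p(cell j | waveform i)
        log_r = np.empty((n, k))
        for j in range(k):
            diff = X - mu[j]
            inv = np.linalg.inv(cov[j])
            _, logdet = np.linalg.slogdet(cov[j])
            log_r[:, j] = (np.log(pi[j]) - 0.5 *
                           (logdet + np.einsum('ni,ij,nj->n', diff, inv, diff)))
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate each cell's mean, covariance, firing share
        nk = r.sum(axis=0)
        pi = nk / n
        mu = (r.T @ X) / nk[:, None]
        for j in range(k):
            diff = X - mu[j]
            cov[j] = ((r[:, j, None] * diff).T @ diff / nk[j]
                      + 1e-6 * np.eye(d))  # jitter for stability
    return mu, cov, pi, r

mu, cov, pi, r = fit_gmm(X, k=2)
labels = r.argmax(axis=1)  # assign each waveform to its most likely cell
```

The richer models in the abstract's range replace each component's single Gaussian with a hidden Markov model over waveform shapes, but the overall assign-by-responsibility structure stays the same.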

