Review for NeurIPS paper: Learning identifiable and interpretable latent models of high-dimensional neural activity using pi-VAE
Additional Feedback:
On Reproducibility:
- I think the basic methodology could be replicated, but it would have been nice to include code as supplementary material. I hope the authors can assure me that the code will be documented and made available upon publication.
On The Motor Cortex Dataset:
- The VAE and pi-VAE seem to perform similarly in panel i and panel m.
- The better performance of pi-VAE in panel h vs. l is likely due to the input variable "u", which forces different latent representations (update: after writing this, I noticed that this is indeed the case based on supplementary figure S1; though pi-VAE is still slightly better). This is fine, but perhaps makes the result unsurprising -- wouldn't other supervised methods (e.g.
On The Hippocampal Dataset:
- For fig 4B, I think that linear discriminant analysis (LDA) would be sufficient to get separation between the two running directions --- i.e. this would recover "latent 1".
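The reviewer's LDA point can be illustrated with a minimal sketch. This is a hypothetical toy, not the paper's data: two Gaussian clusters stand in for the two running directions, and a numpy-only Fisher discriminant recovers a single separating axis (the reviewer's "latent 1").

```python
import numpy as np

# Hypothetical stand-ins for the two running directions (not the hippocampal data)
rng = np.random.default_rng(0)
X0 = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))  # direction A
X1 = rng.normal(loc=[3.0, 1.0], scale=0.5, size=(100, 2))  # direction B

mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
# Within-class scatter matrix
Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
# Fisher discriminant direction: the single axis that separates the classes
w = np.linalg.solve(Sw, mu1 - mu0)

# Projecting onto w separates the two classes with a simple midpoint threshold
p0, p1 = X0 @ w, X1 @ w
threshold = 0.5 * (p0.mean() + p1.mean())
accuracy = (np.mean(p0 < threshold) + np.mean(p1 > threshold)) / 2
print(accuracy)
```

On well-separated clusters like these, a single linear projection is enough, which is the reviewer's argument for why fig 4B's separation is recoverable without the full model.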
Diffusion-Based Generation of Neural Activity from Disentangled Latent Codes
McCart, Jonathan D., Sedler, Andrew R., Versteeg, Christopher, Mifsud, Domenick, Rigotti-Thompson, Mattia, Pandarinath, Chethan
Recent advances in recording technology have allowed neuroscientists to monitor activity from thousands of neurons simultaneously. Latent variable models are increasingly valuable for distilling these recordings into compact and interpretable representations. Here we propose a new approach to neural data analysis that leverages advances in conditional generative modeling to enable the unsupervised inference of disentangled behavioral variables from recorded neural activity. Our approach builds on InfoDiffusion, which augments diffusion models with a set of latent variables that capture important factors of variation in the data. We apply our model, called Generating Neural Observations Conditioned on Codes with High Information (GNOCCHI), to time series neural data and test its application to synthetic and biological recordings of neural activity during reaching. In comparison to a VAE-based sequential autoencoder, GNOCCHI learns higher-quality latent spaces that are more clearly structured and more disentangled with respect to key behavioral variables. These properties enable accurate generation of novel samples (unseen behavioral conditions) through simple linear traversal of the latent spaces produced by GNOCCHI. Our work demonstrates the potential of unsupervised, information-based models for the discovery of interpretable latent spaces from neural data, enabling researchers to generate high-quality samples from unseen conditions.
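The "simple linear traversal" the abstract describes can be sketched generically. This is an assumption-laden illustration, not the GNOCCHI implementation: `z_a` and `z_b` are hypothetical latent codes inferred for two seen behavioral conditions, and the interpolated codes would be passed to a trained decoder to generate activity for unseen intermediate conditions.

```python
import numpy as np

def linear_traversal(z_a: np.ndarray, z_b: np.ndarray, n_steps: int) -> np.ndarray:
    """Return n_steps latent codes evenly spaced along the line from z_a to z_b."""
    alphas = np.linspace(0.0, 1.0, n_steps)[:, None]
    return (1.0 - alphas) * z_a + alphas * z_b

z_a = np.array([0.0, 1.0, -1.0])   # hypothetical code for one seen condition
z_b = np.array([2.0, -1.0, 1.0])   # hypothetical code for another seen condition
path = linear_traversal(z_a, z_b, n_steps=5)
print(path.shape)  # one latent code per interpolation step
```

Such traversals are only meaningful when the latent space is disentangled with respect to the behavioral variable, which is precisely the property the paper claims for GNOCCHI over the VAE baseline.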
FPGA Deployment of LFADS for Real-time Neuroscience Experiments
Liu, Xiaohan, Chen, ChiJui, Huang, YanLun, Yang, LingChi, Khoda, Elham E, Chen, Yihui, Hauck, Scott, Hsu, Shih-Chieh, Lai, Bo-Cheng
Large-scale recordings of neural activity are providing new opportunities to study neural population dynamics. A powerful method for analyzing such high-dimensional measurements is to deploy an algorithm to learn the low-dimensional latent dynamics. LFADS (Latent Factor Analysis via Dynamical Systems) is a deep learning method for inferring latent dynamics from high-dimensional neural spiking data recorded simultaneously in single trials. This method has shown remarkable performance in modeling complex brain signals, with an average inference latency in milliseconds. As our capacity to record from many neurons simultaneously increases exponentially, it is becoming crucial to support low-latency inference for these algorithms. To improve the real-time processing ability of LFADS, we introduce an efficient implementation of LFADS models on Field Programmable Gate Arrays (FPGAs). Our implementation shows an inference latency of 41.97 $\mu$s for processing the data in a single trial on a Xilinx U55C.
lfads-torch: A modular and extensible implementation of latent factor analysis via dynamical systems
Sedler, Andrew R., Pandarinath, Chethan
Latent factor analysis via dynamical systems (LFADS) is an RNN-based variational sequential autoencoder that achieves state-of-the-art performance in denoising high-dimensional neural activity for downstream applications in science and engineering. Recently introduced variants and extensions continue to demonstrate the applicability of the architecture to a wide variety of problems in neuroscience. Since the development of the original implementation of LFADS, new technologies have emerged that use dynamic computation graphs, minimize boilerplate code, compose model configuration files, and simplify large-scale training. Building on these modern Python libraries, we introduce lfads-torch -- a new open-source implementation of LFADS that unifies existing variants and is designed to be easier to understand, configure, and extend. Documentation, source code, and issue tracking are available at https://github.com/arsedler9/lfads-torch .
Neural Latent Aligner: Cross-trial Alignment for Learning Representations of Complex, Naturalistic Neural Data
Cho, Cheol Jun, Chang, Edward F., Anumanchipalli, Gopala K.
Understanding the neural implementation of complex human behaviors is one of the major goals in neuroscience. To this end, it is crucial to find a true representation of the neural data, which is challenging due to the high complexity of behaviors and the low signal-to-noise ratio (SNR) of the signals. Here, we propose a novel unsupervised learning framework, Neural Latent Aligner (NLA), to find well-constrained, behaviorally relevant neural representations of complex behaviors. The key idea is to align representations across repeated trials to learn cross-trial consistent information. Furthermore, we propose a novel, fully differentiable time warping model (TWM) to resolve the temporal misalignment of trials. When applied to intracranial electrocorticography (ECoG) of natural speaking, our model learns better representations for decoding behaviors than the baseline models, especially in lower dimensional space. The TWM is empirically validated by measuring behavioral coherence between aligned trials. The proposed framework learns more cross-trial consistent representations than the baselines, and when visualized, the manifold reveals shared neural trajectories across trials.
Representation learning for neural population activity with Neural Data Transformers
Ye, Joel, Pandarinath, Chethan
Neural population activity is theorized to reflect an underlying dynamical structure. This structure can be accurately captured using state space models with explicit dynamics, such as those based on recurrent neural networks (RNNs). However, using recurrence to explicitly model dynamics necessitates sequential processing of data, slowing real-time applications such as brain-computer interfaces. Here we introduce the Neural Data Transformer (NDT), a non-recurrent alternative. We test the NDT's ability to capture autonomous dynamical systems by applying it to synthetic datasets with known dynamics and data from monkey motor cortex during a reaching task well-modeled by RNNs. The NDT models these datasets as well as state-of-the-art recurrent models. Further, its non-recurrence enables 3.9ms inference, well within the loop time of real-time applications and more than 6 times faster than recurrent baselines on the monkey reaching dataset. These results suggest that an explicit dynamics model is not necessary to model autonomous neural population dynamics. Code: https://github.com/snel-repo/neural-data-transformers
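The recurrence vs. non-recurrence distinction the NDT exploits can be shown with a toy numpy sketch. This is not the NDT architecture; shapes and weights are arbitrary. An RNN-style model must loop over timesteps because each hidden state depends on the previous one, while a self-attention layer transforms all timesteps in one batched matrix operation, which is what removes the sequential bottleneck in inference.

```python
import numpy as np

T, N, D = 50, 30, 16            # timesteps, neurons, model dimension (arbitrary)
rng = np.random.default_rng(1)
X = rng.normal(size=(T, N))     # stand-in for binned spiking activity
W_in = rng.normal(size=(N, D)) * 0.1

# Sequential (RNN-style): each step depends on the previous hidden state
W_h = rng.normal(size=(D, D)) * 0.1
h = np.zeros(D)
H_seq = []
for t in range(T):
    h = np.tanh(X[t] @ W_in + h @ W_h)
    H_seq.append(h)
H_seq = np.stack(H_seq)

# Parallel (attention-style): all timesteps transformed in one matrix product
Q = K = V = X @ W_in
scores = Q @ K.T / np.sqrt(D)
A = np.exp(scores - scores.max(axis=-1, keepdims=True))
A /= A.sum(axis=-1, keepdims=True)     # softmax over timesteps
H_par = A @ V

print(H_seq.shape, H_par.shape)        # same output shape, no sequential loop
```

The parallel path is why a transformer can hit the 3.9 ms inference figure the abstract reports: on parallel hardware its cost per trial does not include a step-by-step dependency chain of length T.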