Latent Diffusion for Neural Spiking Data

Neural Information Processing Systems

Modern datasets in neuroscience enable unprecedented inquiries into the relationship between complex behaviors and the activity of many simultaneously recorded neurons. While latent variable models can successfully extract low-dimensional embeddings from such recordings, using them to generate realistic spiking data, especially in a behavior-dependent manner, still poses a challenge. Here, we present Latent Diffusion for Neural Spiking data (LDNS), a diffusion-based generative model with a low-dimensional latent space: LDNS employs an autoencoder with structured state-space (S4) layers to project discrete high-dimensional spiking data into continuous time-aligned latents. On these inferred latents, we train expressive (conditional) diffusion models, enabling us to sample neural activity with realistic single-neuron and population spiking statistics. Next, we demonstrate its flexibility by generating variable-length data that mimics human cortical activity during attempted speech.
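The two-stage pipeline this abstract describes (compress spiking data into continuous time-aligned latents, then train a diffusion model on those latents) can be sketched in simplified form. This is a schematic only: the S4 autoencoder is replaced by a placeholder linear projection, only the standard DDPM forward (noising) process is shown, and all shapes and values are illustrative rather than LDNS's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: spike counts (trials, time, neurons) compressed to
# continuous latents (trials, time, d_latent). LDNS uses an S4 autoencoder;
# a random linear projection stands in as a placeholder encoder here.
spikes = rng.poisson(lam=2.0, size=(8, 100, 50)).astype(float)
W_enc = rng.normal(size=(50, 8)) / np.sqrt(50)
latents = spikes @ W_enc  # (8, 100, 8), still time-aligned

# Standard DDPM forward process on the latents:
#   q(z_t | z_0) = N(sqrt(abar_t) * z_0, (1 - abar_t) * I)
T = 1000
betas = np.linspace(1e-4, 0.02, T)
abar = np.cumprod(1.0 - betas)  # monotonically decreasing signal fraction

def q_sample(z0, t):
    """Noise clean latents z0 to diffusion step t."""
    eps = rng.normal(size=z0.shape)
    return np.sqrt(abar[t]) * z0 + np.sqrt(1.0 - abar[t]) * eps

z_noisy = q_sample(latents, T - 1)  # nearly pure Gaussian noise at t = T-1
```

In the full method, a learned denoiser would invert this noising process to sample new latents (optionally conditioned on behavior), which the trained decoder would then map back to realistic spiking activity.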


Reviews: Selecting causal brain features with a single conditional independence test per feature

Neural Information Processing Systems

Summary: Conditional independence (CI) testing is a core component of causal structure learning algorithms. In the most general case, however, one must either perform a very large number of CI tests or condition on very large sets of variables. This work proposes using at most two CI tests per candidate parent, each conditioning on at most one variable, to filter out the true parents of a response variable under certain conditions. The goal is to identify the direct causes of a response variable from among a set of candidate parent variables {M_i}; the response variable is assumed to have no observed descendants.
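The filtering idea summarized above can be illustrated with a toy example: test each candidate parent against the response while conditioning on a single other candidate. The CI test below is a Fisher z-test of (partial) correlation, one common choice but not necessarily the one used in the reviewed paper, and the data-generating model is invented for illustration.

```python
import numpy as np
from math import sqrt, log, erf

rng = np.random.default_rng(0)

def fisher_z_pvalue(x, y, z=None):
    """p-value for (partial) independence of x and y via Fisher's z-transform.

    If z is given, x and y are first residualised on z, i.e. we condition
    on a single variable as in the procedure sketched above."""
    n, k = len(x), 0
    if z is not None:
        Z = np.column_stack([np.ones(n), z])
        x = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
        y = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
        k = 1
    r = np.corrcoef(x, y)[0, 1]
    stat = 0.5 * log((1 + r) / (1 - r)) * sqrt(n - k - 3)
    return 2 * (1 - 0.5 * (1 + erf(abs(stat) / sqrt(2))))

# Toy model: M1 -> Y, M2 unrelated. Each candidate parent is tested
# against Y conditioning on one other candidate.
n = 2000
M1, M2 = rng.normal(size=n), rng.normal(size=n)
Y = M1 + 0.5 * rng.normal(size=n)

alpha = 1e-3
parents = [name for name, cand, other in [("M1", M1, M2), ("M2", M2, M1)]
           if fisher_z_pvalue(cand, Y, z=other) < alpha]
# parents retains M1 (dependent on Y given M2) and drops M2.
```

This keeps the number of tests linear in the number of candidates and the conditioning-set size at one, which is exactly the computational saving the review highlights.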


Concept-based explainability for an EEG transformer model

Madsen, Anders Gjølbye, Lehn-Schiøler, William Theodor, Jónsdóttir, Áshildur, Arnardóttir, Bergdís, Hansen, Lars Kai

arXiv.org Artificial Intelligence

Deep learning models are complex due to their size, structure, and inherent randomness in training procedures. Additional complexity arises from the selection of datasets and inductive biases. Addressing these challenges for explainability, Kim et al. (2018) introduced Concept Activation Vectors (CAVs), which aim to understand deep models' internal states in terms of human-aligned concepts. These concepts correspond to directions in latent space, identified using linear discriminants. Although this method was first applied to image classification, it was later adapted to other domains, including natural language processing. In this work, we attempt to apply the method to electroencephalogram (EEG) data for explainability in Kostas et al.'s BENDR (2021), a large-scale transformer model. A crucial part of this endeavor involves defining the explanatory concepts and selecting relevant datasets to ground concepts in the latent space. Our focus is on two mechanisms for EEG concept formation: the use of externally labeled EEG datasets, and the application of anatomically defined concepts. The former approach is a straightforward generalization of methods used in image classification, while the latter is novel and specific to EEG. We present evidence that both approaches to concept formation yield valuable insights into the representations learned by deep EEG models.
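The CAV construction referenced here (Kim et al., 2018) can be sketched with synthetic activations. The class-mean difference below stands in for the linear classifier usually fitted to separate concept from random examples, and all data, dimensions, and gradients are synthetic placeholders rather than actual BENDR outputs.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 64-d latent activations from a frozen EEG encoder.
# "Concept" examples (e.g. segments carrying an externally labeled or
# anatomically defined EEG concept) are shifted along a hidden direction;
# "random" examples are not. All names are illustrative.
d = 64
true_dir = rng.normal(size=d)
true_dir /= np.linalg.norm(true_dir)
concept_acts = rng.normal(size=(200, d)) + 2.0 * true_dir
random_acts = rng.normal(size=(200, d))

# CAV: normal direction of a linear discriminant between concept and random
# activations. Kim et al. fit a linear classifier; the class-mean difference
# is the simplest such discriminant and is used here as a stand-in.
cav = concept_acts.mean(axis=0) - random_acts.mean(axis=0)
cav /= np.linalg.norm(cav)

# TCAV-style score: fraction of inputs whose logit gradient (synthetic here)
# has a positive component along the CAV, i.e. for which the concept
# increases the model's output.
grads = rng.normal(size=(500, d)) + 0.5 * true_dir
tcav_score = float(np.mean(grads @ cav > 0))
```

A score well above 0.5 would indicate that the concept direction is systematically relevant to the model's predictions; in practice the gradients would come from the trained EEG transformer rather than a simulation.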


Extending the Diagnostic Capabilities of Artificial Intelligence-Based Instructional Systems

Mathan, Santosh (Honeywell Labs) | Yeung, Nick (University of Oxford)

AI Magazine

Active problem solving has been shown to be one of the most effective ways to acquire complex skills. Whether one is learning a programming language by implementing a computer program, or learning calculus by solving problems, context-sensitive feedback and guidance are crucial to keeping problem-solving efforts fruitful and efficient. This article reviews AI-based algorithms that can diagnose student difficulties during active problem solving and serve as the basis for providing context-sensitive and individualized guidance. The article also describes the crucial role that sensor-based estimates of cognitive resources, such as working memory capacity and attention, can play in enhancing the diagnostic capabilities of intelligent instructional systems.