Three-dimensional spike localization and improved motion correction for Neuropixels recordings

Neural Information Processing Systems

Neuropixels (NP) probes are dense linear multi-electrode arrays that have rapidly become essential tools for studying the electrophysiology of large neural populations. Unfortunately, a number of challenges remain in analyzing the large datasets output by these probes. Here we introduce several new methods for extracting useful spiking information from NP probes. First, we use a simple point neuron model, together with a neural-network denoiser, to efficiently map single spikes detected on the probe into three-dimensional localizations. Previous methods localized individual spikes in two dimensions only; we show that the new localization approach is significantly more robust and provides an improved feature set for clustering spikes according to neural identity ("spike sorting"). Next, we denoise the resulting three-dimensional point-cloud representation of the data, and show that the resulting 3D images can be accurately registered over time, leading to improved tracking of time-varying neural activity over the probe, and in turn, crisper estimates of neural clusters over time. Open source code is available at https://github.
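The point-neuron localization idea can be sketched as a monopole fit: each spike's amplitude on channel i is modeled as decaying with the inverse distance from a 3D source position, and the position is recovered by least squares. This is a minimal illustrative sketch, not the paper's implementation; the function name, the centroid-based initialization, and the pure inverse-distance amplitude model are our own assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def localize_spike(channel_positions, amplitudes):
    """Fit a monopole point-source model: amplitude_i ~ k / ||p - c_i||.

    channel_positions: (n, 3) electrode coordinates (e.g. in microns).
    amplitudes: (n,) peak-to-peak spike amplitudes on each channel.
    Returns the estimated 3D source position p.
    """
    # Initialize at the amplitude-weighted centroid, nudged off the
    # probe plane to avoid the 1/0 singularity at an electrode.
    w = amplitudes / amplitudes.sum()
    x0 = w @ channel_positions + np.array([0.0, 0.0, 10.0])
    k0 = amplitudes.max() * 10.0

    def residuals(params):
        pos, k = params[:3], params[3]
        dist = np.linalg.norm(channel_positions - pos, axis=1)
        return k / dist - amplitudes

    fit = least_squares(residuals, np.append(x0, k0))
    return fit.x[:3]
```

Note that with all electrodes in one plane the model is symmetric in the off-plane coordinate (z and -z give identical amplitudes), so only |z| is identifiable from amplitudes alone.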


Supplementary Material for: Recursive Inference for Variational Autoencoders

Neural Information Processing Systems

IAF: The autoregressive-flow-based encoder model for q(z|x) [5], which has richer expressiveness than the VAE's Gaussian posterior encoder. The number of flows is chosen from {1, 2, 4, 8}.
HF: The Householder flow encoder model, which represents a full covariance using Householder transformations [18]. The number of flows is chosen from {1, 2, 4, 8}.
ME: As a baseline, we also consider the same mixture encoder model, but unlike our recursive mixture learning, the model is trained conventionally, end-to-end; all mixture components' parameters are updated simultaneously. The number of mixture components is chosen from {2, 3, 4, 5}.
RME: Our proposed recursive mixture encoder model. We vary the number of components to be added, M, over {1, 2, 3, 4}, yielding mixture orders 2 to 5. In addition, we test a variant of RME that employs the entropy regularization schemes of previous Boosted VI methods.
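The HF baseline's core operation is simple to state: a Householder reflection H = I - 2vv^T/||v||^2 is orthogonal, so stacking a few reflections on top of a diagonal-Gaussian sample induces a full-covariance posterior without any log-determinant cost. A minimal numpy sketch of one reflection (our own illustrative code, not the compared implementation):

```python
import numpy as np

def householder(z, v):
    """Apply one Householder reflection H z, where H = I - 2 v v^T / ||v||^2.

    H is orthogonal (|det H| = 1), so the flow's log-det-Jacobian term is
    zero; composing several reflections lets a diagonal-Gaussian encoder
    sample acquire a full covariance, as in the HF encoder baseline.
    """
    v = v / np.linalg.norm(v)
    return z - 2.0 * v * (v @ z)
```

In the HF encoder, each reflection vector v is itself predicted by the inference network, and the "number of flows" hyperparameter above is the number of stacked reflections.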


Recursive Inference for Variational Autoencoders Minyoung Kim

Neural Information Processing Systems

Inference networks of traditional Variational Autoencoders (VAEs) are typically amortized, resulting in relatively inaccurate posterior approximations compared to instance-wise variational optimization. Recent semi-amortized approaches were proposed to address this drawback; however, their iterative gradient-update procedures can be computationally demanding. To address these issues, in this paper we introduce an accurate amortized inference algorithm. We propose a novel recursive mixture estimation algorithm for VAEs that iteratively augments the current mixture with new components so as to maximally reduce the divergence between the variational and the true posteriors. Using the functional gradient approach, we devise an intuitive learning criterion for selecting a new mixture component: the new component has to improve the data likelihood (lower bound) and, at the same time, be as divergent from the current mixture distribution as possible, thus increasing representational diversity. Unlike the recently proposed boosted variational inference (BVI), which optimizes each instance separately without amortization, our method is fully amortized. A crucial benefit of our approach is that inference at test time requires a single feed-forward pass through the mixture inference network, making it significantly faster than the semi-amortized approaches. We show that our approach yields higher test-data likelihood than the state of the art on several benchmark datasets.
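The recursive augmentation step can be illustrated with a toy 1D analogue: freeze the current mixture, then pick a single new component (its location and mixing weight) that most reduces the remaining discrepancy to the target density. This is a deliberately simplified sketch with grid search in place of functional-gradient learning and squared error in place of KL divergence; all names and the candidate grids are our own assumptions, not the paper's algorithm.

```python
import numpy as np

def gauss(x, mu, sigma=0.5):
    """Unit-mass Gaussian density evaluated on the grid x."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def recursive_mixture_fit(target, x, n_steps=2):
    """Greedy recursive mixture fit (toy analogue of recursive inference):
    at each step the current mixture is frozen and one new component's
    mean and mixing weight are chosen to most reduce the error to the
    target density."""
    mix = np.zeros_like(x)
    errors = []
    for step in range(n_steps):
        best_err, best_mix = np.inf, mix
        # The first step places the whole mass; later steps blend in a
        # new component with a candidate mixing weight.
        alphas = [1.0] if step == 0 else [0.2, 0.35, 0.5]
        for mu in np.linspace(x.min(), x.max(), 61):
            comp = gauss(x, mu)
            for a in alphas:
                cand = (1.0 - a) * mix + a * comp
                err = float(np.mean((cand - target) ** 2))
                if err < best_err:
                    best_err, best_mix = err, cand
        mix = best_mix
        errors.append(best_err)
    return mix, errors
```

On a bimodal target, the first component captures one mode and the second step recovers the other, mirroring the paper's intuition that each new component should cover mass the current mixture misses.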


e3844e186e6eb8736e9f53c0c5889527-AuthorFeedback.pdf

Neural Information Processing Systems

We are very grateful to all reviewers for their detailed, insightful, and constructive comments and questions. Some suggestions could not be fully addressed within the scope of this rebuttal, but we believe they are very important, and we will pursue them in our ongoing study. Our responses (blue) to reviewers' comments/questions (black/bold/italic) are as follows. We agree that our claims about the drawbacks of SAVI on Binary MNIST were somewhat exaggerated. We will refine these claims and also cite the SAVI methods. That claim of ours turned out to be incorrect.