Chklovskii, Dmitri
Toward Next-Generation Artificial Intelligence: Catalyzing the NeuroAI Revolution
Zador, Anthony, Escola, Sean, Richards, Blake, Ölveczky, Bence, Bengio, Yoshua, Boahen, Kwabena, Botvinick, Matthew, Chklovskii, Dmitri, Churchland, Anne, Clopath, Claudia, DiCarlo, James, Ganguli, Surya, Hawkins, Jeff, Koerding, Konrad, Koulakov, Alexei, LeCun, Yann, Lillicrap, Timothy, Marblestone, Adam, Olshausen, Bruno, Pouget, Alexandre, Savin, Cristina, Sejnowski, Terrence, Simoncelli, Eero, Solla, Sara, Sussillo, David, Tolias, Andreas S., Tsao, Doris
This implies that the bulk of the work in developing general AI can be achieved by building systems that match the perceptual and motor abilities of animals, and that the subsequent step to human-level intelligence would be considerably smaller. This is good news because progress on the first goal can rely on the favored subjects of neuroscience research - rats, mice, and non-human primates - for which extensive and rapidly expanding behavioral and neural datasets can guide the way. Thus, we believe that the NeuroAI path will lead to necessary advances if we figure out the core capabilities that all animals possess in embodied sensorimotor interaction with the world.

NeuroAI Grand Challenge: The Embodied Turing Test

In 1950, Alan Turing proposed the "imitation game" as a test of a machine's ability to exhibit intelligent behavior indistinguishable from that of a human.
A Neural Network for Semi-Supervised Learning on Manifolds
Genkin, Alexander, Sengupta, Anirvan M., Chklovskii, Dmitri
Semi-supervised learning algorithms typically construct a weighted graph of data points to represent a manifold. However, an explicit graph representation is problematic for neural networks operating in the online setting. Here, we propose a feed-forward neural network capable of semi-supervised learning on manifolds without using an explicit graph representation. Our algorithm uses channels that represent localities on the manifold, such that correlations between channels represent manifold structure. The proposed neural network has two layers. The first layer learns to build a representation of low-dimensional manifolds in the input data, as proposed recently in [8]. The second layer learns to classify data using both occasional supervision and the similarity of the manifold representation of the data. The channel carrying label information for the second layer is assumed to be "silent" most of the time. Learning in both layers is Hebbian, making our network design biologically plausible. We experimentally demonstrate the effect of semi-supervised learning on non-trivial manifolds.
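The following is a minimal sketch of the two-layer scheme described above, not the authors' code: the first layer learns manifold "channels" with Hebbian/anti-Hebbian updates in the spirit of similarity matching, and the second layer is a Hebbian classifier that updates only when the (mostly silent) label channel fires. All class and parameter names are illustrative.

```python
import numpy as np

class TwoLayerSemiSupervised:
    """Illustrative two-layer network: manifold channels + Hebbian classifier."""

    def __init__(self, n_in, n_channels, n_classes, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(n_channels, n_in))  # feedforward weights
        self.M = np.eye(n_channels)                              # lateral (anti-Hebbian) weights
        self.V = np.zeros((n_classes, n_channels))               # classifier weights
        self.lr = lr

    def layer1(self, x, n_iter=50):
        # Rectified recurrent dynamics: y = max(0, W x - (M - I) y)
        y = np.zeros(self.W.shape[0])
        for _ in range(n_iter):
            y = np.maximum(0.0, self.W @ x - (self.M - np.eye(len(y))) @ y)
        return y

    def update(self, x, label=None):
        y = self.layer1(x)
        # Hebbian / anti-Hebbian updates (similarity-matching style)
        self.W += self.lr * (np.outer(y, x) - (y ** 2)[:, None] * self.W)
        self.M += self.lr * (np.outer(y, y) - (y ** 2)[:, None] * self.M)
        # The label channel is "silent" most of the time: learn only when it fires
        if label is not None:
            t = np.eye(self.V.shape[0])[label]                   # one-hot target
            self.V += self.lr * np.outer(t, y)                   # Hebbian classifier update
        return self.V @ y                                        # class scores
```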
Manifold-tiling Localized Receptive Fields are Optimal in Similarity-preserving Neural Networks
Sengupta, Anirvan, Pehlevan, Cengiz, Tepper, Mariano, Genkin, Alexander, Chklovskii, Dmitri
Many neurons in the brain, such as place cells in the rodent hippocampus, have localized receptive fields, i.e., they respond to a small neighborhood of stimulus space. What is the functional significance of such representations and how can they arise? Here, we propose that localized receptive fields emerge in similarity-preserving networks of rectifying neurons that learn low-dimensional manifolds populated by sensory inputs. Numerical simulations of such networks on standard datasets yield manifold-tiling localized receptive fields. More generally, we show analytically that, for data lying on symmetric manifolds, optimal solutions of objectives, from which similarity-preserving networks are derived, have localized receptive fields. Therefore, nonnegative similarity-preserving mapping (NSM) implemented by neural networks can model representations of continuous manifolds in the brain.
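As a concrete illustration of the objective the paper analyzes, here is a minimal offline sketch, not the paper's implementation: projected gradient descent on the nonnegative similarity-matching objective ||X^T X - Y^T Y||_F^2. On a ring of inputs, the nonnegative outputs settle into arc-shaped, manifold-tiling receptive fields. Function names and step sizes are illustrative.

```python
import numpy as np

def nsm_offline(X, k, n_iter=1000, lr=0.02, seed=0):
    """X: (d, n) inputs as columns; returns nonnegative responses Y: (k, n)."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    G = X.T @ X                              # input similarity (Gram) matrix, (n, n)
    Y = np.abs(rng.normal(scale=0.1, size=(k, n)))
    for _ in range(n_iter):
        grad = 4.0 * Y @ (Y.T @ Y - G) / n   # gradient of ||G - Y^T Y||_F^2 / n
        Y = np.maximum(0.0, Y - lr * grad)   # projected gradient keeps Y >= 0
    return Y

# Example: inputs on a ring; each output neuron's receptive field tiles an arc.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
X = np.vstack([np.cos(theta), np.sin(theta)])
Y = nsm_offline(X, k=10)
```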
OnACID: Online Analysis of Calcium Imaging Data in Real Time
Giovannucci, Andrea, Friedrich, Johannes, Kaufman, Matt, Churchland, Anne, Chklovskii, Dmitri, Paninski, Liam, Pnevmatikakis, Eftychios A.
Optical imaging methods using calcium indicators are critical for monitoring the activity of large neuronal populations in vivo. Imaging experiments typically generate a large amount of data that needs to be processed to extract the activity of the imaged neuronal sources. While deriving such processing algorithms is an active area of research, most existing methods require processing large amounts of data at a time, rendering them vulnerable to the volume of the recorded data and preventing real-time experimental interrogation. Here we introduce OnACID, an Online framework for the Analysis of streaming Calcium Imaging Data, which includes i) motion artifact correction, ii) neuronal source extraction, and iii) activity denoising and deconvolution. Our approach combines and extends previous work on online dictionary learning and calcium imaging data analysis to deliver an automated pipeline that can discover and track the activity of hundreds of cells in real time, thereby enabling new types of closed-loop experiments. We apply our algorithm to two large-scale experimental datasets, benchmark its performance on manually annotated data, and show that it outperforms a popular offline approach.
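OnACID itself is a full pipeline; the sketch below is only a hypothetical illustration of the online matrix-factorization idea at its core (in the style of online dictionary learning), not the OnACID/CaImAn API: each incoming frame yields nonnegative per-neuron activities by regression against spatial footprints, and running sufficient statistics refine the footprints in place.

```python
import numpy as np

class OnlineSourceExtractor:
    """Illustrative streaming NMF core: footprints A, one frame at a time."""

    def __init__(self, A):
        self.A = A.astype(float).copy()          # (n_pixels, n_neurons) spatial footprints
        k = self.A.shape[1]
        self.C = np.zeros((k, k))                # running sum of c c^T
        self.B = np.zeros((self.A.shape[0], k))  # running sum of f c^T

    def process_frame(self, f, n_iter=20):
        A = self.A
        # Nonnegative activity for this frame via projected gradient (NNLS)
        L = np.linalg.norm(A.T @ A, 2) + 1e-9    # step size from the Lipschitz constant
        c = np.zeros(A.shape[1])
        for _ in range(n_iter):
            c = np.maximum(0.0, c - (A.T @ (A @ c - f)) / L)
        # Accumulate sufficient statistics; memory cost is independent of frame count
        self.C += np.outer(c, c)
        self.B += np.outer(f, c)
        # Block-coordinate footprint refinement (online dictionary learning style)
        for j in range(A.shape[1]):
            if self.C[j, j] > 1e-12:
                A[:, j] = np.maximum(0.0, A[:, j] + (self.B[:, j] - A @ self.C[:, j]) / self.C[j, j])
        return c                                 # denoised activity for this frame
```

Keeping only the running sums C and B is what makes the memory footprint independent of the number of frames, which is the property that enables streaming, real-time operation.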
A Normative Theory of Adaptive Dimensionality Reduction in Neural Networks
Pehlevan, Cengiz, Chklovskii, Dmitri
To make sense of the world, our brains must analyze high-dimensional datasets streamed by our sensory organs. Because such analysis begins with dimensionality reduction, modelling early sensory processing requires biologically plausible online dimensionality reduction algorithms. Recently, we derived such an algorithm, termed similarity matching, from a Multidimensional Scaling (MDS) objective function. However, in the existing algorithm, the number of output dimensions is set a priori by the number of output neurons and cannot be changed. Because the number of informative dimensions in sensory inputs is variable, there is a need for adaptive dimensionality reduction. Here, we derive biologically plausible dimensionality reduction algorithms which adapt the number of output dimensions to the eigenspectrum of the input covariance matrix. We formulate three objective functions which, in the offline setting, are optimized by the projections of the input dataset onto its principal subspace scaled by the eigenvalues of the output covariance matrix. In turn, the output eigenvalues are computed as i) soft-thresholded, ii) hard-thresholded, and iii) equalized thresholded eigenvalues of the input covariance matrix. In the online setting, we derive the three corresponding adaptive algorithms and map them onto the dynamics of neuronal activity in networks with biologically plausible local learning rules. Remarkably, in the last two networks, neurons are divided into two classes, which we identify with principal neurons and interneurons in biological circuits.
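In the offline setting described above, the soft-thresholding solution can be written in closed form. Here is a minimal sketch under my reading of the objective (not the paper's networks): project onto the principal subspace and rescale so that the output covariance eigenvalues are soft-thresholded input eigenvalues, with the output dimensionality set adaptively by the threshold. The function name and threshold value are illustrative.

```python
import numpy as np

def adaptive_soft_threshold_pca(X, alpha):
    """X: (d, n) centered data; alpha: threshold. Returns Y with adaptive rank."""
    n = X.shape[1]
    C = X @ X.T / n                                  # input covariance matrix
    lam, U = np.linalg.eigh(C)                       # eigenvalues in ascending order
    lam, U = lam[::-1], U[:, ::-1]                   # sort descending
    keep = lam > alpha                               # dimensions that survive thresholding
    scale = np.sqrt((lam[keep] - alpha) / lam[keep]) # soft-threshold rescaling
    Y = scale[:, None] * (U[:, keep].T @ X)          # (k_adaptive, n) output
    return Y

# Example: the output dimensionality tracks the input eigenspectrum.
rng = np.random.default_rng(1)
X = rng.normal(size=(10, 1000)) * np.array([5, 3, 1, 1, 1, 1, 1, 1, 1, 1])[:, None]
X -= X.mean(axis=1, keepdims=True)
Y = adaptive_soft_threshold_pca(X, alpha=2.0)        # keeps the two informative dimensions
```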
Super-resolution using Sparse Representations over Learned Dictionaries: Reconstruction of Brain Structure using Electron Microscopy
Hu, Tao, Nunez-Iglesias, Juan, Vitaladevuni, Shiv, Scheffer, Lou, Xu, Shan, Bolorizadeh, Mehdi, Hess, Harald, Fetter, Richard, Chklovskii, Dmitri
A central problem in neuroscience is reconstructing neuronal circuits at the synapse level. Due to the wide range of scales in brain architecture, such reconstruction requires imaging that is both high-resolution and high-throughput. Existing electron microscopy (EM) techniques possess the required resolution in the lateral plane and offer either high throughput or high depth resolution, but not both. Here, we exploit recent advances in unsupervised learning and signal processing to obtain high depth-resolution EM images computationally, without sacrificing throughput. First, we show that brain tissue can be represented as a sparse linear combination of localized basis functions that are learned using high-resolution datasets. We then develop compressive sensing-inspired techniques that can reconstruct the brain tissue from very few (typically 5) tomographic views of each section. This enables tracing of neuronal processes and, hence, high-throughput reconstruction of neural circuits at the level of individual synapses.
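The reconstruction step can be illustrated with standard sparse-recovery machinery. The sketch below uses stand-in random operators in place of the learned dictionary and the tomographic projections, so it shows the compressive-sensing idea (iterative soft-thresholding on the composed operator) rather than the paper's actual pipeline; all sizes and names are illustrative.

```python
import numpy as np

def ista(M, y, lam, n_iter=200):
    """Solve min_a 0.5*||y - M a||^2 + lam*||a||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(M, 2) ** 2                    # Lipschitz constant of the gradient
    a = np.zeros(M.shape[1])
    for _ in range(n_iter):
        g = a - (M.T @ (M @ a - y)) / L              # gradient step on the quadratic term
        a = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold (L1 prox)
    return a

rng = np.random.default_rng(2)
D = rng.normal(size=(400, 800))                      # stand-in for the learned dictionary
D /= np.linalg.norm(D, axis=0)
P = rng.normal(size=(5 * 20, 400)) / np.sqrt(400)    # stand-in for 5 tomographic views
a_true = np.zeros(800)
a_true[rng.choice(800, 10, replace=False)] = rng.normal(size=10)  # sparse ground truth
y = P @ (D @ a_true)                                 # few-view measurements
a_hat = ista(P @ D, y, lam=0.01)                     # recover sparse coefficients
x_hat = D @ a_hat                                    # synthesized high-resolution section
```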