Learning from brains how to regularize machines

Neural Information Processing Systems

Despite impressive performance on numerous visual tasks, Convolutional Neural Networks (CNNs) --- unlike brains --- are often highly sensitive to small perturbations of their input. We propose to regularize CNNs using large-scale neuroscience data to learn more robust neural features in terms of representational similarity. We presented natural images to mice and measured the responses of thousands of neurons from cortical visual areas. Next, we denoised the notoriously variable neural activity using strong predictive models trained on this large corpus of responses from the mouse visual system, and calculated the representational similarity for millions of pairs of images from the model's predictions. We then used the neural representational similarity to regularize CNNs trained on image classification by penalizing intermediate representations that deviated from neural ones.
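The scheme above can be illustrated with a minimal NumPy sketch: build a representational dissimilarity matrix (RDM) from each set of features and penalize the mismatch between the CNN's RDM and the neural one. The correlation-distance metric, the squared penalty, and all names here are illustrative assumptions, not the authors' actual implementation, which operates on millions of image pairs inside the training loop.

```python
import numpy as np

def rdm(features):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between response patterns for each pair of stimuli.
    features: (n_stimuli, n_units) array."""
    z = features - features.mean(axis=1, keepdims=True)
    z /= np.linalg.norm(z, axis=1, keepdims=True)
    return 1.0 - z @ z.T

def similarity_penalty(cnn_features, neural_features):
    """Mean squared difference between the CNN's and the neural RDMs;
    scaled by a coefficient, this would be added to the classification loss."""
    return np.mean((rdm(cnn_features) - rdm(neural_features)) ** 2)

rng = np.random.default_rng(0)
neural = rng.standard_normal((8, 100))  # stand-in for denoised responses to 8 images
cnn = rng.standard_normal((8, 64))      # stand-in for intermediate CNN features
penalty = similarity_penalty(cnn, neural)
```

Because the penalty compares RDMs rather than raw activations, it is indifferent to the number of units in each representation, which is what lets mouse recordings constrain a CNN layer of different width.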

Representational Issues in Meta-Learning

AAAI Conferences

To address the problem of algorithm selection for the classification task, we equip a relational case base with new similarity measures that are able to cope with multirelational representations. The proposed approach builds on notions from clustering and is closely related to ideas developed in similarity-based relational learning. The results provide evidence that the relational representation coupled with the appropriate similarity measure can improve performance. The ideas presented are pertinent not only for meta-learning representational issues, but for all domains with similar representation requirements.

Expanding our view of vision

AITopics Original Links

Every time you open your eyes, visual information flows into your brain, which interprets what you're seeing. Now, for the first time, MIT neuroscientists have noninvasively mapped this flow of information in the human brain with unique accuracy, using a novel brain-scanning technique. This technique, which combines two existing technologies, allows researchers to identify precisely both the location and timing of human brain activity. Using this new approach, the MIT researchers scanned individuals' brains as they looked at different images and were able to pinpoint, to the millisecond, when the brain recognizes and categorizes an object, and where these processes occur. "This method gives you a visualization of 'when' and 'where' at the same time."

Visualizing Representational Dynamics with Multidimensional Scaling Alignment

Artificial Intelligence

Representational similarity analysis (RSA) has been shown to be an effective framework to characterize brain activity and deep neural network activations as representational geometry by computing the pairwise distances of the response patterns as a representational dissimilarity matrix (RDM). However, how to properly analyze and visualize the representational geometry as dynamics over the time course from stimulus onset to offset is not well understood. In this work, we formulated the pipeline to understand representational dynamics. The scarcity of methods to characterize the representational profiles as dynamics creates a major barrier to answering interesting questions such as: how are objects represented in the brain over the time course from early perception to categorical decision making; does object identification or visual categorization follow a hierarchical classification paradigm; do different classes of objects merge and branch at different time points based on different tasks or recurrence paradigms; are these representational dynamics oscillatory or recurrent?
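Two ingredients such a pipeline plausibly needs can be sketched in NumPy: embedding each timepoint's RDM into a low-dimensional space with classical multidimensional scaling, and aligning successive embeddings into a common frame. The orthogonal-Procrustes alignment and all function names below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def classical_mds(rdm, k=2):
    """Embed an n x n dissimilarity matrix into k dimensions via
    classical (Torgerson) multidimensional scaling."""
    n = rdm.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (rdm ** 2) @ J            # double-centered Gram matrix
    w, v = np.linalg.eigh(B)                 # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]            # keep the top-k dimensions
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

def procrustes_align(X, Y):
    """Orthogonally rotate/reflect Y to best match X, so that embeddings
    from successive timepoints share one coordinate frame."""
    U, _, Vt = np.linalg.svd(Y.T @ X)
    return Y @ (U @ Vt)

# Demo: recover a 2-D configuration from its Euclidean distance matrix.
rng = np.random.default_rng(1)
pts = rng.standard_normal((6, 2))            # ground-truth points
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
emb = classical_mds(D, k=2)                  # matches pts up to rotation/reflection
```

Because MDS is only defined up to rotation and reflection, per-timepoint embeddings are arbitrarily oriented; the Procrustes step is what makes trajectories across timepoints comparable.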

Insights on representational similarity in neural networks with canonical correlation

Neural Information Processing Systems

Comparing different neural network representations and determining how representations evolve over time remain challenging open questions in our understanding of the function of neural networks. Comparing representations in neural networks is fundamentally difficult, as the structure of representations varies greatly, even across groups of networks trained on identical tasks, and over the course of training. Here, we develop projection weighted CCA (Canonical Correlation Analysis) as a tool for understanding neural networks, building on SVCCA, a recently proposed method (Raghu et al., 2017). We first improve the core method, showing how to differentiate between signal and noise, and then apply this technique to compare across a group of CNNs, demonstrating that networks which generalize converge to more similar representations than networks which memorize, that wider networks converge to more similar solutions than narrow networks, and that trained networks with identical topology but different learning rates converge to distinct clusters with diverse representations. We also investigate the representational dynamics of RNNs, across both training and sequential timesteps, finding that RNNs converge in a bottom-up pattern over the course of training and that the hidden state is highly variable over the course of a sequence, even when accounting for linear transforms.
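The core of CCA-based representation comparison can be sketched in a few lines of NumPy: whiten each activation matrix with an SVD, then take the singular values of the whitened cross-correlation as the canonical correlations. This is a minimal sketch of plain CCA only; the projection weighting and signal/noise separation that the paper adds are omitted, and the names are illustrative.

```python
import numpy as np

def cca_correlations(X, Y):
    """Canonical correlations between two activation matrices
    (datapoints x neurons), computed by whitening each set with an SVD
    and taking the singular values of the whitened cross-correlation."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Ux = np.linalg.svd(X, full_matrices=False)[0]  # orthonormal basis for X
    Uy = np.linalg.svd(Y, full_matrices=False)[0]  # orthonormal basis for Y
    rho = np.linalg.svd(Ux.T @ Uy, compute_uv=False)
    return np.clip(rho, 0.0, 1.0)

# A representation and an invertible linear transform of it are
# perfectly correlated under CCA.
rng = np.random.default_rng(2)
X = rng.standard_normal((50, 5))
Y = X @ (np.eye(5) + 0.1 * rng.standard_normal((5, 5)))
rho = cca_correlations(X, Y)
```

The invariance to invertible linear transforms demonstrated here is exactly what makes CCA suitable for comparing layers of different networks, whose neurons have no shared coordinate system; it is also why "accounting for linear transforms" matters in the RNN analysis.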