V1T: large-scale mouse V1 response prediction using a Vision Transformer
Li, Bryan M., Cornacchia, Isabel M., Rochefort, Nathalie L., Onken, Arno
arXiv.org Artificial Intelligence
Accurate predictive models of the visual cortex's neural responses to natural visual stimuli remain a challenge in computational neuroscience. In this work, we introduce V1T, a novel Vision Transformer-based architecture that learns a shared visual and behavioral representation across animals. We evaluate our model on two large datasets recorded from mouse primary visual cortex and outperform previous convolution-based models by more than 12.7% in prediction performance. Moreover, we show that the self-attention weights learned by the Transformer correlate with the population receptive fields. Our model thus sets a new benchmark for neural response prediction and can be used jointly with behavioral and neural recordings to reveal meaningful characteristic features of the visual cortex.
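The abstract describes a Vision Transformer that maps a visual stimulus, combined with behavioral covariates, to predicted responses of a recorded neural population. A minimal NumPy sketch of that pipeline is below; all dimensions, weight initialisations, and the single-head attention block are illustrative assumptions, not the paper's actual V1T configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (illustrative, not the paper's configuration)
H = W = 36          # stimulus height/width in pixels
P = 6               # patch size -> (36 // 6) ** 2 = 36 patches
D = 32              # token embedding dimension
N_NEURONS = 100     # number of recorded neurons

def patchify(img, p=P):
    """Split an HxW image into flattened p x p patches."""
    h, w = img.shape
    patches = img.reshape(h // p, p, w // p, p).transpose(0, 2, 1, 3)
    return patches.reshape(-1, p * p)              # (n_patches, p*p)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Randomly initialised projections (stand-ins for learned parameters)
W_embed = rng.normal(size=(P * P, D)) / np.sqrt(P * P)
W_q = rng.normal(size=(D, D)) / np.sqrt(D)
W_k = rng.normal(size=(D, D)) / np.sqrt(D)
W_v = rng.normal(size=(D, D)) / np.sqrt(D)
W_read = rng.normal(size=(D, N_NEURONS)) / np.sqrt(D)

def predict_responses(stimulus, behavior):
    """One attention block: embed patches, mix in behavior, attend, read out."""
    tokens = patchify(stimulus) @ W_embed          # (n_patches, D)
    tokens = tokens + behavior                     # broadcast behavioral covariates
    q, k, v = tokens @ W_q, tokens @ W_k, tokens @ W_v
    attn = softmax(q @ k.T / np.sqrt(D))           # (n_patches, n_patches)
    pooled = (attn @ v).mean(axis=0)               # (D,) pooled representation
    return pooled @ W_read, attn                   # predicted rates, attention map

stimulus = rng.normal(size=(H, W))
behavior = rng.normal(size=(D,))                   # e.g. pupil size, running speed
rates, attn = predict_responses(stimulus, behavior)
print(rates.shape, attn.shape)                     # (100,) (36, 36)
```

The attention map `attn` assigns each patch a weight over all other patches; it is this kind of learned attention weighting that the paper reports as correlating with population receptive fields.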
Sep-5-2023