Population Transformer: Learning Population-level Representations of Intracranial Activity
Geeling Chau, Christopher Wang, Sabera Talukder, Vighnesh Subramaniam, Saraswati Soedarmadji, Yisong Yue, Boris Katz, Andrei Barbu
We present a self-supervised framework that learns population-level codes for intracranial neural recordings at scale, unlocking the benefits of representation learning for a key neuroscience recording modality. The Population Transformer (PopT) lowers the amount of data required for decoding experiments while increasing accuracy, even on never-before-seen subjects and tasks. We address two key challenges in developing PopT: sparse electrode distribution and varying electrode locations across patients. PopT stacks on top of pretrained representations and enhances downstream tasks by enabling learned aggregation of multiple spatially-sparse data channels. Beyond decoding, we interpret the pretrained PopT and fine-tuned models to show how they can be used to provide neuroscience insights learned from massive amounts of data.
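For illustration only, here is a minimal PyTorch sketch of what "learned aggregation of multiple spatially-sparse data channels" might look like: per-electrode pretrained embeddings are combined with projected electrode coordinates and passed through a transformer encoder whose summary token yields a population-level representation. All class names, dimensions, and design choices below are assumptions made for exposition, not the paper's released implementation.

```python
import torch
import torch.nn as nn

class ChannelAggregator(nn.Module):
    """Hypothetical transformer aggregator over per-electrode embeddings.

    Takes pretrained per-channel features plus electrode coordinates and
    returns a single population-level embedding for downstream decoding.
    """

    def __init__(self, embed_dim=256, num_heads=8, num_layers=4):
        super().__init__()
        # Learned summary token; its output serves as the population-level code.
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        # Project 3D electrode coordinates into the embedding space so the model
        # can handle varying electrode locations across subjects.
        self.pos_proj = nn.Linear(3, embed_dim)
        layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, channel_embs, coords, padding_mask=None):
        # channel_embs: (batch, n_channels, embed_dim) pretrained per-channel features
        # coords:       (batch, n_channels, 3) electrode positions (e.g., MNI space)
        # padding_mask: (batch, n_channels) True where a channel is absent
        x = channel_embs + self.pos_proj(coords)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1)
        if padding_mask is not None:
            pad = torch.zeros(x.size(0), 1, dtype=torch.bool, device=x.device)
            padding_mask = torch.cat([pad, padding_mask], dim=1)
        out = self.encoder(x, src_key_padding_mask=padding_mask)
        return out[:, 0]  # population-level embedding
```

A padding mask of this kind is one plausible way to accommodate subjects with different numbers of implanted electrodes; the summary-token readout is likewise an assumption, chosen here because it makes the sketch self-contained.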
arXiv.org Artificial Intelligence
Jun-5-2024