MINiT
Multiple Instance Neuroimage Transformer
Ayush Singla, Qingyu Zhao, Daniel K. Do, Yuyin Zhou, Kilian M. Pohl, Ehsan Adeli
For the first time, we propose using a multiple-instance-learning-based, convolution-free transformer model, called the Multiple Instance Neuroimage Transformer (MINiT), for the classification of T1-weighted (T1w) MRIs. We first present several variants of transformer models adapted for neuroimages. These models extract non-overlapping 3D blocks from the input volume and perform multi-headed self-attention on a sequence of their linear projections. MINiT, in contrast, treats each non-overlapping 3D block of the input MRI as its own instance, splitting it further into non-overlapping 3D patches on which multi-headed self-attention is computed. As a proof of concept, we evaluate the efficacy of our model by training it to identify sex from T1w MRIs of two public datasets: Adolescent Brain Cognitive Development (ABCD) and the National Consortium on Alcohol and Neurodevelopment in Adolescence (NCANDA). The learned attention maps highlight voxels contributing to identifying sex differences in brain morphometry.
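The two-level tokenization described above (non-overlapping 3D blocks as instances, each split again into 3D patches) can be sketched with a simple reshape-based extraction. This is an illustrative sketch only: the volume size, block size, and patch size below are assumptions for demonstration, not the paper's configuration, and the `extract_blocks` helper is hypothetical.

```python
import numpy as np

def extract_blocks(volume, block):
    """Split a 3D volume into non-overlapping blocks of shape `block`.

    Returns an array of shape (n_blocks, *block). Assumes each volume
    dimension is divisible by the corresponding block dimension.
    """
    D, H, W = volume.shape
    bd, bh, bw = block
    return (volume
            .reshape(D // bd, bd, H // bh, bh, W // bw, bw)
            .transpose(0, 2, 4, 1, 3, 5)   # group the three block axes together
            .reshape(-1, bd, bh, bw))

# MINiT-style hierarchy (toy sizes): blocks are instances,
# and each instance is split again into patches for self-attention.
volume = np.arange(32 ** 3, dtype=np.float32).reshape(32, 32, 32)
blocks = extract_blocks(volume, (8, 8, 8))                       # 4^3 = 64 instances
patches = np.stack([extract_blocks(b, (4, 4, 4)) for b in blocks])  # 2^3 = 8 patches each
print(blocks.shape)   # (64, 8, 8, 8)
print(patches.shape)  # (64, 8, 4, 4, 4)
```

Each row of `patches` would then be linearly projected into a token sequence and fed to multi-headed self-attention within its instance; that projection and attention step is omitted here.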