Look Here: Vision Transformers with Directed Attention Generalize and Extrapolate
–Neural Information Processing Systems
High-resolution images offer more information about scenes, which can improve model accuracy. However, the dominant model architecture in computer vision, the vision transformer (ViT), cannot effectively leverage larger images without finetuning: although transformers are flexible with respect to sequence length, ViTs extrapolate poorly to more patches at test time. We attribute this shortcoming to current patch position encoding methods, which create a distribution shift when extrapolating. We propose a drop-in replacement for the position encoding of plain ViTs that restricts attention heads to fixed fields of view, pointed in different directions, using 2D attention masks. Our novel method, called LookHere, provides translation equivariance, ensures attention head diversity, and limits the distribution shift that attention heads face when extrapolating. We demonstrate that LookHere improves performance on classification.
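To make the masking idea concrete, below is a minimal sketch of directed 2D attention masks, assuming a simplified setup: four axis-aligned directions and the helper names `directional_mask` and `masked_attention` are illustrative choices for exposition, not the paper's actual implementation (LookHere's precise fields of view and any bias terms may differ).

```python
import torch

def directional_mask(grid: int, direction: str) -> torch.Tensor:
    """Boolean (N, N) mask over an N = grid*grid patch sequence that lets
    each query patch attend only to keys lying in one direction from it.
    The four axis-aligned directions here are an assumption for illustration."""
    ys, xs = torch.meshgrid(
        torch.arange(grid), torch.arange(grid), indexing="ij"
    )
    ys, xs = ys.flatten(), xs.flatten()      # (N,) row/col coordinate per patch
    dy = ys[None, :] - ys[:, None]           # key row minus query row, (N, N)
    dx = xs[None, :] - xs[:, None]           # key col minus query col, (N, N)
    if direction == "right":
        keep = dx >= 0
    elif direction == "left":
        keep = dx <= 0
    elif direction == "down":
        keep = dy >= 0
    else:  # "up"
        keep = dy <= 0
    return keep  # each patch always keeps itself (dx = dy = 0), so no empty rows

def masked_attention(q, k, v, mask):
    # q, k, v: (N, d); mask: (N, N) bool. Masked logits are set to -inf so
    # softmax assigns them zero weight, restricting the head's field of view.
    logits = (q @ k.T) / k.shape[-1] ** 0.5
    logits = logits.masked_fill(~mask, float("-inf"))
    return logits.softmax(dim=-1) @ v

# Example: four heads looking in four directions over a 14x14 patch grid.
grid, dim = 14, 64
masks = [directional_mask(grid, d) for d in ("left", "right", "up", "down")]
q = k = v = torch.randn(grid * grid, dim)
outs = [masked_attention(q, k, v, m) for m in masks]
```

Because each mask is defined by a key's position relative to the query patch, it is translation-equivariant and can be recomputed for any grid size at test time; each head then keeps the same directional field of view on a larger image, which is the intuition behind limiting the distribution shift under extrapolation.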