GTA: A Geometry-Aware Attention Mechanism for Multi-View Transformers
Takeru Miyato, Bernhard Jaeger, Max Welling, Andreas Geiger
As transformers are equivariant to the permutation of input tokens, encoding the positional information of tokens is necessary for many tasks. However, since existing positional encoding schemes were initially designed for NLP tasks, their suitability for vision tasks, which typically exhibit different structural properties in their data, is questionable. We argue that existing positional encoding schemes are suboptimal for 3D vision tasks, as they do not respect the underlying 3D geometric structure. Based on this hypothesis, we propose a geometry-aware attention mechanism that encodes the geometric structure of tokens as relative transformations determined by the geometric relationship between queries and key-value pairs. By evaluating on multiple novel view synthesis (NVS) datasets in the sparse wide-baseline multi-view setting, we show that our attention, called Geometric Transform Attention (GTA), improves the learning efficiency and performance of state-of-the-art transformer-based NVS models without any additional learned parameters and with only minor computational overhead.

The transformer model (Vaswani et al., 2017), which is composed of a stack of permutation-symmetric layers, processes input tokens as a set and lacks direct awareness of the tokens' structural information. Consequently, transformer models cannot by themselves perceive the structure of input tokens, such as word order in NLP or the 2D positions of image pixels or patches in image processing. A common way to make transformers position-aware is through vector embeddings: in NLP, a typical approach is to transform the position values of the word tokens into embedding vectors that are added to the input tokens or attention weights (Vaswani et al., 2017; Shaw et al., 2018). While initially designed for NLP, these positional encoding techniques are widely used for 2D and 3D vision tasks today (Wang et al., 2018; Dosovitskiy et al., 2021; Sajjadi et al., 2022b; Du et al., 2023).
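The core idea of applying a relative transformation between each query and key-value pair, rather than adding positional embeddings, can be sketched as follows. This is a minimal single-head illustration, not the paper's exact formulation: the function name, shapes, and the choice of per-token matrices `T_q`, `T_k` (representing each token's geometric transformation) are assumptions for the sake of the example.

```python
import numpy as np

def geometry_aware_attention(q, k, v, T_q, T_k):
    """Illustrative sketch of geometry-aware attention (not the paper's exact API).

    q, k, v: (n, d) token features.
    T_q, T_k: (n, d, d) per-token matrices representing each token's
    geometric transformation (e.g. a representation of its camera pose).
    Keys and values are mapped into each query's frame via the relative
    transform inv(T_q[i]) @ T_k[j] before standard dot-product attention.
    """
    n, d = q.shape
    # Relative transforms: rel[i, j] = inv(T_q[i]) @ T_k[j]
    rel = np.einsum('iab,jbc->ijac', np.linalg.inv(T_q), T_k)
    # Express keys and values relative to each query's frame
    k_rel = np.einsum('ijab,jb->ija', rel, k)   # (n, n, d)
    v_rel = np.einsum('ijab,jb->ija', rel, v)   # (n, n, d)
    # Scaled dot-product attention over the transformed keys
    logits = np.einsum('ia,ija->ij', q, k_rel) / np.sqrt(d)
    attn = np.exp(logits - logits.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)
    return np.einsum('ij,ija->ia', attn, v_rel)
```

Because the geometry enters as fixed matrix multiplications determined by the camera transformations, this adds no learned parameters, consistent with the claim above.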
Here, a natural question arises: "Are existing encoding schemes suitable for tasks with very different geometric structures?" Consider, for example, 3D vision tasks that use multi-view images paired with camera transformations. The 3D Euclidean symmetry underlying multi-view images is a more intricate structure than the 1D sequence of words. With the typical vector-embedding approach, the model is tasked with uncovering the camera poses embedded in the tokens and consequently struggles to understand the effect of non-commutative Euclidean transformations.
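The non-commutativity mentioned above is easy to verify concretely: unlike 1D positional offsets, which simply add, 3D rotations depend on the order in which they are applied. A small check (the specific rotations chosen here are illustrative):

```python
import numpy as np

# Two elementary 3D rotations: 90 degrees about the z-axis and about the x-axis.
Rz = np.array([[0., -1., 0.],
               [1.,  0., 0.],
               [0.,  0., 1.]])
Rx = np.array([[1., 0.,  0.],
               [0., 0., -1.],
               [0., 1.,  0.]])

# Order matters: rotating about z then x differs from x then z.
print(np.allclose(Rz @ Rx, Rx @ Rz))  # False
```

This is why a scheme that treats pose as just another additive embedding discards structure that a transformation-based encoding preserves.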
Oct-16-2023