MATE: Multi-view Attention for Table Transformer Efficiency
Julian Martin Eisenschlos, Maharshi Gor, Thomas Müller, William W. Cohen
This work presents a sparse-attention Transformer architecture for modeling documents that contain large tables. Tables are ubiquitous on the web, and are rich in information. However, more than 20% of relational tables on the web have 20 or more rows (Cafarella et al., 2008), and these large tables present a challenge for current Transformer models, which are typically limited to 512 tokens. Here we propose MATE, a novel Transformer architecture designed to model the structure of web tables. MATE uses sparse attention in a way that allows heads to efficiently attend to either rows or columns in a table. This architecture scales linearly with respect to speed and memory, and can handle documents containing more than 8000 tokens with current accelerators. MATE also has a more appropriate inductive bias for tabular data, and sets a new state-of-the-art for three table reasoning datasets. For HybridQA (Chen et al., 2020b), a dataset that involves large documents containing tables, we improve the best prior result by 19 points.
arXiv.org Artificial Intelligence
Sep-9-2021
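
The abstract describes heads that attend only within a table row or only within a table column. As a rough illustration (not the authors' code), the sketch below builds dense boolean attention masks for "row" heads and "column" heads over a linearized table. The function name, the coordinate convention (id 0 marking non-table tokens, which are treated as global), and the handling of text tokens are all assumptions made for this example; MATE itself achieves linear scaling in sequence length without materializing a full seq_len × seq_len mask.

```python
# Minimal sketch of row/column sparse-attention masks, assuming each token
# carries a (row, column) coordinate and id 0 marks non-table (global) text.
import numpy as np


def table_attention_masks(row_ids: np.ndarray, col_ids: np.ndarray):
    """Return boolean masks for "row" heads and "column" heads.

    row_ids / col_ids: int arrays of shape [seq_len]; 0 marks tokens outside
    the table (e.g. the question), which are assumed here to attend to, and be
    attended by, every token.
    """
    in_table = (row_ids > 0) & (col_ids > 0)
    # Any pair involving a non-table token is always allowed (assumption).
    global_pair = ~in_table[:, None] | ~in_table[None, :]

    same_row = row_ids[:, None] == row_ids[None, :]
    same_col = col_ids[:, None] == col_ids[None, :]
    both_in_table = in_table[:, None] & in_table[None, :]

    row_head_mask = global_pair | (same_row & both_in_table)
    col_head_mask = global_pair | (same_col & both_in_table)
    return row_head_mask, col_head_mask


# Toy example: 2 question tokens followed by a 2x2 table linearized row by row.
row_ids = np.array([0, 0, 1, 1, 2, 2])
col_ids = np.array([0, 0, 1, 2, 1, 2])
row_mask, col_mask = table_attention_masks(row_ids, col_ids)
print(row_mask.astype(int))
print(col_mask.astype(int))
```

Under these assumptions, a row head lets cell tokens see only their own row plus the surrounding text, while a column head restricts them to their own column, which is the inductive bias the abstract attributes to tabular structure.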