FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness
–Neural Information Processing Systems
Transformers are slow and memory-hungry on long sequences, since the time and memory complexity of self-attention are quadratic in sequence length. Approximate attention methods have attempted to address this problem by trading off model quality to reduce the compute complexity, but often do not achieve wall-clock speedup. We argue that a missing principle is making attention algorithms IO-aware: accounting for reads and writes between levels of GPU memory.
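A minimal sketch (in PyTorch, not the paper's FlashAttention implementation) of standard softmax attention, illustrating the quadratic-memory point above: the full N x N score matrix is materialized before the softmax. The function name `naive_attention` and the example shapes are illustrative assumptions.

```python
import torch

def naive_attention(Q, K, V):
    # Q, K, V: (N, d) query, key, and value matrices for one attention head
    d = Q.shape[-1]
    scores = Q @ K.T / d ** 0.5       # (N, N) score matrix -- quadratic in N
    weights = torch.softmax(scores, dim=-1)
    return weights @ V                # (N, d) attention output

# Example: N = 4096 tokens, head dimension d = 64
Q = torch.randn(4096, 64)
K = torch.randn(4096, 64)
V = torch.randn(4096, 64)
out = naive_attention(Q, K, V)        # materializes a 4096 x 4096 matrix
```

An IO-aware algorithm avoids writing this N x N matrix to slow GPU high-bandwidth memory, which is where the wall-clock cost of standard attention largely comes from.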