Block Sparse Flash Attention
Daniel Ohayon, Itay Lamprecht, Itay Hubara, Israel Cohen, Daniel Soudry, Noam Elata
arXiv.org Artificial Intelligence
Modern large language models increasingly require long contexts for reasoning and multi-document tasks, but attention's quadratic complexity creates a severe computational bottleneck. We present Block-Sparse FlashAttention (BSFA), a drop-in replacement that accelerates long-context inference while preserving model quality. Unlike methods that predict importance before computing scores, BSFA computes exact query-key similarities to select the top-k most important value blocks for each query. By comparing per-block maximum scores against calibrated thresholds, we skip approximately 50% of the computation and memory transfers for pruned blocks. Our training-free approach requires only a one-time threshold calibration on a small dataset to learn the per-layer and per-head attention score distributions. We provide a CUDA kernel implementation that can be used as a drop-in replacement for FlashAttention. On Llama-3.1-8B, BSFA achieves up to 1.10x speedup on real-world reasoning benchmarks and up to 1.24x for needle-in-a-haystack retrieval tasks while maintaining above 99% baseline accuracy, with certain configurations even improving accuracy by focusing on the most relevant content, substantially outperforming existing sparse attention methods. The implementation is available at https://github.com/Danielohayon/Block-Sparse-Flash-Attention
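The abstract describes the core mechanism: compute exact query-key similarities, take the maximum score within each key/value block, and skip blocks whose maximum falls below a calibrated per-layer, per-head threshold. The following is a minimal NumPy sketch of that threshold-based block pruning, not the paper's CUDA kernel; the function name, `block_size`, and `threshold` parameters are illustrative assumptions, and the calibration step that sets the threshold is omitted.

```python
import numpy as np

def block_sparse_attention(q, k, v, block_size=4, threshold=0.0):
    """Illustrative sketch (not the paper's kernel): for each query,
    score key blocks by their maximum exact similarity and drop blocks
    whose per-block max falls below a calibrated threshold."""
    n, d = q.shape
    scale = 1.0 / np.sqrt(d)
    out = np.zeros_like(v, dtype=float)
    n_blocks = n // block_size
    for i in range(n):
        scores = q[i] @ k.T * scale              # exact query-key similarities
        keep = np.zeros(n, dtype=bool)
        for b in range(n_blocks):
            blk = slice(b * block_size, (b + 1) * block_size)
            if scores[blk].max() >= threshold:   # per-block max vs. threshold
                keep[blk] = True
        if not keep.any():                       # safety: keep the best block
            b = int(np.argmax(scores)) // block_size
            keep[b * block_size:(b + 1) * block_size] = True
        s = scores[keep]
        w = np.exp(s - s.max())                  # softmax over surviving keys
        w /= w.sum()
        out[i] = w @ v[keep]
    return out
```

With `threshold=-np.inf` no block is pruned and the result matches dense softmax attention; raising the threshold trades accuracy for the skipped computation the abstract quantifies (roughly 50% of block work at the paper's operating points).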
Dec-9-2025