Natively Trainable Sparse Attention for Hierarchical Point Cloud Datasets
Nicolas Lapautre, Maria Marchenko, Carlos Miguel Patiño, Xin Zhou
– arXiv.org Artificial Intelligence
Unlocking the potential of transformers on datasets of large physical systems depends on overcoming the quadratic scaling of the attention mechanism. This work explores combining the Erwin architecture with the Native Sparse Attention (NSA) mechanism to improve the efficiency and receptive field of transformer models on large-scale physical systems. We adapt the NSA mechanism to non-sequential data, implement the Erwin NSA model, and evaluate it on three datasets from the physical sciences -- cosmology simulations, molecular dynamics, and air pressure modeling -- achieving performance that matches or exceeds that of the original Erwin model. We also reproduce the experimental results from the Erwin paper to validate its implementation.
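The abstract does not spell out how NSA is adapted to point clouds, but the scaling argument it rests on is easy to illustrate: dense attention materializes an (N, N) score matrix, while attention restricted to blocks of spatially nearby points costs O(N · block_size). The sketch below is a minimal NumPy illustration of that contrast; the function names, the block size, and the first-coordinate sort (a crude stand-in for a ball-tree or space-filling-curve ordering such as Erwin uses) are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def dense_attention(q, k, v):
    """Full attention: the (N, N) score matrix makes the cost quadratic in N."""
    scores = q @ k.T / np.sqrt(q.shape[-1])   # (N, N)
    return softmax(scores) @ v

def block_local_attention(q, k, v, coords, block_size=64):
    """Attention restricted to fixed-size blocks of spatially nearby points.

    Points are ordered by a cheap spatial sort (here: the first coordinate,
    a stand-in for ball-tree grouping) and chunked, so each query attends to
    block_size keys instead of all N. Cost: O(N * block_size).
    """
    order = np.argsort(coords[:, 0])
    inverse = np.argsort(order)               # undoes the sort at the end
    q, k, v = q[order], k[order], v[order]
    out = np.empty_like(v)
    for start in range(0, len(q), block_size):
        end = start + block_size
        out[start:end] = dense_attention(q[start:end], k[start:end], v[start:end])
    return out[inverse]                       # restore original point order

rng = np.random.default_rng(0)
N, d = 4096, 32
coords = rng.normal(size=(N, 3))              # a toy 3D point cloud
q, k, v = (rng.normal(size=(N, d)) for _ in range(3))
print(block_local_attention(q, k, v, coords).shape)   # (4096, 32)
```

NSA itself combines compressed, selected, and sliding-window attention branches; the sketch above only conveys why restricting the score matrix changes the asymptotic cost, which is the property the abstract's efficiency claim depends on.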
Aug-15-2025
- Country:
  - Europe
    - Netherlands > North Holland
      - Amsterdam (0.05)
    - Slovenia > Drava
      - Municipality of Benedikt > Benedikt (0.04)
  - South America > Chile
- Genre:
  - Research Report (0.51)
- Technology:
  - Information Technology > Artificial Intelligence
    - Machine Learning (0.69)
    - Natural Language (0.47)
    - Vision (0.47)