Learning Inter-Atomic Potentials without Explicit Equivariance
Elhag, Ahmed A., Raja, Arun, Morehead, Alex, Blau, Samuel M., Morris, Garrett M., Bronstein, Michael M.
arXiv.org Artificial Intelligence
Accurate and scalable machine-learned interatomic potentials (MLIPs) are essential for molecular simulations ranging from drug discovery to new materials design. Current state-of-the-art models enforce roto-translational symmetries through equivariant neural network architectures, a hard-wired inductive bias that can reduce flexibility, computational efficiency, and scalability. In this work, we introduce TransIP: Transformer-based Inter-Atomic Potentials, a novel training paradigm that achieves symmetry compliance without explicit architectural constraints. Our approach guides a generic non-equivariant Transformer-based model to learn SO(3)-equivariance by optimizing its representations in the embedding space. Trained on the recent Open Molecules (OMol25) collection, a large and diverse molecular dataset built specifically for MLIPs and covering different types of molecules (including small organics, biomolecular fragments, and electrolyte-like species), TransIP effectively learns symmetry in its latent space, yielding low equivariance error. Compared to a data-augmentation baseline, TransIP achieves a 40% to 60% improvement in performance across varying OMol25 dataset sizes. More broadly, our work shows that learned equivariance can be a powerful and efficient alternative to augmentation-based MLIP models.
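The abstract does not give TransIP's exact training objective, but the core quantity involved in learning (rather than hard-wiring) SO(3)-equivariance can be illustrated concretely: an equivariance error that measures how far a model's vector-valued outputs (e.g. forces) are from commuting with random rotations of the input coordinates. The sketch below is illustrative, not the paper's method; all function names (`random_rotation`, `equivariance_error`) and the toy models are assumptions introduced here.

```python
import numpy as np

def random_rotation(rng):
    """Sample a uniformly distributed 3x3 rotation matrix (det = +1) via QR."""
    A = rng.standard_normal((3, 3))
    Q, R = np.linalg.qr(A)
    Q = Q * np.sign(np.diag(R))   # standardize column signs for a unique Q
    if np.linalg.det(Q) < 0:      # flip one column if the result is improper
        Q[:, 0] = -Q[:, 0]
    return Q

def equivariance_error(f, coords, rng, n_samples=8):
    """Mean squared || f(x R^T) - f(x) R^T || over random rotations R.

    `f` maps (N, 3) atomic coordinates to (N, 3) vector outputs (e.g. forces);
    the error is zero exactly when f commutes with the sampled rotations.
    """
    err = 0.0
    for _ in range(n_samples):
        R = random_rotation(rng)
        err += np.mean((f(coords @ R.T) - f(coords) @ R.T) ** 2)
    return err / n_samples

rng = np.random.default_rng(0)
coords = rng.standard_normal((5, 3))

# An exactly SO(3)-equivariant toy "force field": scales each position by its norm.
f_equivariant = lambda x: x * np.linalg.norm(x, axis=1, keepdims=True)
# A generic fixed linear map, standing in for an unconstrained network.
W = rng.standard_normal((3, 3))
f_generic = lambda x: x @ W

print(equivariance_error(f_equivariant, coords, rng))  # ~0, up to float precision
print(equivariance_error(f_generic, coords, rng))      # nonzero: equivariance violated
```

In a learned-equivariance setup of the kind the abstract describes, a term like this (applied to latent representations rather than final outputs) could serve as a regularizer, driving a generic Transformer toward symmetry compliance without constraining its architecture.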
Oct-16-2025
- Country:
- Asia > Singapore (0.04)
- Europe > United Kingdom
- England > Oxfordshire > Oxford (0.05)
- North America > United States (0.14)
- South America > Chile
- Genre:
- Research Report (1.00)
- Industry:
- Energy (1.00)
- Health & Medicine > Pharmaceuticals & Biotechnology (0.34)