Simplifying Hamiltonian and Lagrangian Neural Networks via Explicit Constraints
Marc Finzi (New York University), Ke Alexander Wang (Cornell University), Andrew Gordon Wilson (New York University)
Neural Information Processing Systems
Reasoning about the physical world requires models that are endowed with the right inductive biases to learn the underlying dynamics. Recent works improve generalization for predicting trajectories by learning the Hamiltonian or Lagrangian of a system rather than the differential equations directly. While these methods encode the constraints of the systems using generalized coordinates, we show that embedding the system into Cartesian coordinates and enforcing the constraints explicitly with Lagrange multipliers dramatically simplifies the learning problem. We introduce a series of challenging chaotic and extended-body systems, including systems with N-pendulums, spring coupling, magnetic fields, rigid rotors, and gyroscopes, to push the limits of current approaches. Our experiments show that Cartesian coordinates with explicit constraints lead to a 100x improvement in accuracy and data efficiency.

Figure 1: By using Cartesian coordinates with explicit constraints, we simplify the Hamiltonians and Lagrangians that our models learn, resulting in better long-term predictions and data efficiency than Neural ODEs and Hamiltonian Neural Networks (HNNs).
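The constrained Cartesian formulation can be illustrated on a simple pendulum. This is a hedged sketch, not the paper's implementation: here the Hamiltonian (kinetic plus potential energy) is known analytically rather than learned by a network, and the function names are illustrative. The state is a Cartesian position and momentum in R^2, the holonomic constraint is phi(x) = |x|^2 - l^2 = 0, and the Lagrange multiplier is solved for in closed form so that the constraint is preserved (d^2 phi / dt^2 = 0) along the trajectory.

```python
import numpy as np

def pendulum_dynamics(state, m=1.0, g=9.8):
    """Constrained Hamiltonian dynamics of a pendulum in Cartesian coordinates."""
    x, p = state[:2], state[2:]
    F = np.array([0.0, -m * g])              # -grad V for V = m * g * y
    # Constraint phi(x) = x.x - l^2 = 0 with grad phi = 2x.
    # Requiring d^2 phi / dt^2 = 0 gives a closed-form multiplier:
    lam = (p @ p / m + x @ F) / (2 * (x @ x))
    xdot = p / m
    pdot = F - 2 * lam * x                   # applied force + constraint force
    return np.concatenate([xdot, pdot])

def rk4_step(f, state, dt):
    """One classical Runge-Kutta step."""
    k1 = f(state)
    k2 = f(state + dt / 2 * k1)
    k3 = f(state + dt / 2 * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Pendulum of length l = 1, released at rest, 90 degrees from the bottom.
state = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(1000):                        # integrate for 1 second
    state = rk4_step(pendulum_dynamics, state, 1e-3)

drift = abs(state[:2] @ state[:2] - 1.0)     # |phi| after integration
energy = state[2:] @ state[2:] / 2 + 9.8 * state[1]  # should stay near 0
```

Because the multiplier enforces the constraint at the acceleration level, the trajectory stays on the circle up to integrator error, and energy is conserved to the order of the RK4 truncation error. In the paper's setting, the analytic Hamiltonian above would be replaced by a neural network, while the constraint and multiplier structure stay explicit.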