Effective Dynamics and Transition Pathways from Koopman-Inspired Neural Learning of Collective Variables

Sikorski, Alexander, Donati, Luca, Weber, Marcus, Schütte, Christof

arXiv.org Machine Learning

The ISOKANN (Invariant Subspaces of Koopman Operators Learned by Artificial Neural Networks) framework provides a data-driven route to extract collective variables (CVs) and effective dynamics from complex molecular systems. In this work, we integrate the theoretical foundation of Koopman operators with Krylov-like subspace algorithms and reduced dynamical modeling to build a coherent picture of how to describe metastable transitions in high-dimensional systems based on CVs. Starting from the identification of CVs based on dominant invariant subspaces, we derive the corresponding effective dynamics on the latent space and connect these to transition rates and times, committor functions, and transition pathways. The combination of Koopman-based learning and reduced-dimensional effective dynamics yields a principled framework for computing transition rates and pathways from simulation data. Numerical experiments on one-, two-, and three-dimensional benchmark potentials illustrate the ability of ISOKANN to reconstruct the coarse-grained kinetics and reproduce transition times across enthalpic and entropic barriers.
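The core ISOKANN idea described above can be illustrated with a toy sketch: a power iteration that repeatedly applies the Koopman operator to a membership function and renormalizes it by a shift-scale transformation. This is a simplified, hypothetical setup, not the paper's implementation: the function chi is represented on a 1D grid instead of a neural network, the Koopman operator is estimated by averaging short Euler-Maruyama trajectories, and the double-well potential, step sizes, and iteration counts are all illustrative choices.

```python
import numpy as np

# Toy sketch of an ISOKANN-style power iteration on the 1D double-well
# potential V(x) = (x^2 - 1)^2 (illustrative, not the paper's setup).
rng = np.random.default_rng(0)

def grad_V(x):
    # gradient of V(x) = (x^2 - 1)^2
    return 4.0 * x * (x**2 - 1.0)

def apply_koopman(chi_fn, x0, n_rep=64, dt=1e-3, n_steps=50, beta=3.0):
    """Monte Carlo estimate of (K chi)(x0): average chi over the endpoints
    of short overdamped Langevin trajectories started at each point."""
    x = np.repeat(x0[None, :], n_rep, axis=0)            # (n_rep, n_points)
    for _ in range(n_steps):
        noise = rng.normal(size=x.shape)
        x = x - grad_V(x) * dt + np.sqrt(2.0 * dt / beta) * noise
    return chi_fn(x).mean(axis=0)

grid = np.linspace(-1.5, 1.5, 61)
chi = (grid + 1.5) / 3.0                                 # initial guess in [0, 1]

for _ in range(20):
    chi_fn = lambda x: np.interp(x, grid, chi)           # tabular stand-in for a NN
    k_chi = apply_koopman(chi_fn, grid)
    # shift-scale normalization: the characteristic ISOKANN update
    chi = (k_chi - k_chi.min()) / (k_chi.max() - k_chi.min())

# chi now approximates a membership function separating the two wells,
# small near x = -1 and large near x = +1
```

In this sketch the grid interpolation plays the role the neural network plays in ISOKANN proper; the shift-scale step is what keeps the iteration inside the dominant invariant subspace rather than collapsing onto the constant eigenfunction.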



Neural Pfaffians: Solving Many Many-Electron Schrödinger Equations

Neural Information Processing Systems

Recent works proposed amortizing the cost by learning generalized wave functions across different structures and compounds instead of solving each problem independently.






Variational Inference for Continuous-Time Switching Dynamical Systems

Neural Information Processing Systems

Since many areas, such as biology or discrete-event systems, are naturally described in continuous time, we present a model based on a Markov jump process modulating a subordinated diffusion process. We provide the exact evolution equations for the prior and posterior marginal densities, the direct solutions of which are, however, computationally intractable.



Cutting Through the Noise: On-the-fly Outlier Detection for Robust Training of Machine Learning Interatomic Potentials

Lam, Terry C. W., O'Neill, Niamh, Schran, Christoph, Schaaf, Lars L.

arXiv.org Machine Learning

The accuracy of machine learning interatomic potentials suffers from reference data that contains numerical noise. Often originating from unconverged or inconsistent electronic-structure calculations, this noise is challenging to identify. Existing mitigation strategies, such as manual filtering or iterative refinement of outliers, require either substantial expert effort or multiple expensive retraining cycles, making them difficult to scale to large datasets. Here, we introduce an on-the-fly outlier detection scheme that automatically down-weights noisy samples, without requiring additional reference calculations. By tracking the loss distribution via an exponential moving average, this unsupervised method identifies outliers throughout a single training run. We show that this approach prevents overfitting and matches the performance of iterative refinement baselines with significantly reduced overhead. The method's effectiveness is demonstrated by recovering accurate physical observables for liquid water, including diffusion coefficients, from unconverged reference data. Furthermore, we validate its scalability by training a foundation model for organic chemistry on the SPICE dataset, where it reduces energy errors by a factor of three. This framework provides a simple, automated solution for training robust models on imperfect datasets across dataset sizes.
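The mechanism sketched in the abstract, flagging samples whose loss is extreme relative to an exponential-moving-average estimate of the loss distribution, can be illustrated with a minimal sketch. This is not the paper's exact algorithm: the class name, the three-sigma threshold, the decay rate, and the hard 0/1 weighting are all illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not the paper's exact method): down-weight samples
# whose per-sample loss exceeds an EMA-tracked mean plus n_sigma standard
# deviations, updating the statistics from inlier losses only.
class EMALossFilter:
    def __init__(self, decay=0.99, n_sigma=3.0):
        self.decay = decay        # higher decay -> slower-moving statistics
        self.n_sigma = n_sigma    # outlier threshold in standard deviations
        self.mean = None
        self.var = 0.0

    def weights(self, losses):
        """Return per-sample weights in {0, 1} and update the EMA stats."""
        losses = np.asarray(losses, dtype=float)
        if self.mean is None:     # bootstrap the statistics on the first batch
            self.mean = losses.mean()
            self.var = losses.var()
        threshold = self.mean + self.n_sigma * np.sqrt(self.var)
        w = (losses <= threshold).astype(float)          # 0 flags an outlier
        inliers = losses[w > 0]
        if inliers.size:          # EMA update from inliers, so outliers do
            d = self.decay        # not inflate the threshold they exceeded
            self.mean = d * self.mean + (1 - d) * inliers.mean()
            self.var = d * self.var + (1 - d) * inliers.var()
        return w
```

In a training loop the returned weights would multiply the per-sample losses before reduction, e.g. `(w * losses).sum() / max(w.sum(), 1.0)`, so that flagged samples contribute no gradient in that step without being removed from the dataset.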