chaos


Learning to Emulate Chaos: Adversarial Optimal Transport Regularization

Melo, Gabriel, Santiago, Leonardo, Lu, Peter Y.

arXiv.org Machine Learning

Chaos arises in many complex dynamical systems, from weather to power grids, but is difficult to accurately model using data-driven emulators, including neural operator architectures. For chaotic systems, the inherent sensitivity to initial conditions makes exact long-term forecasts theoretically infeasible, meaning that emulators trained with traditional squared-error losses often fail, especially on noisy data. Recent work has focused on training emulators to match the statistical properties of chaotic attractors by introducing regularization based on handcrafted local features and summary statistics, as well as learned statistics extracted from a diverse dataset of trajectories. In this work, we propose a family of adversarial optimal transport objectives that jointly learn high-quality summary statistics and a physically consistent emulator. We theoretically analyze and experimentally validate a Sinkhorn divergence formulation (2-Wasserstein) and a WGAN-style dual formulation (1-Wasserstein). Our experiments across a variety of chaotic systems, including systems with high-dimensional chaotic attractors, show that emulators trained with our approach exhibit significantly improved long-term statistical fidelity.
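The abstract does not spell out the training objective, so the following is only a rough sketch of what a debiased Sinkhorn (entropy-regularized 2-Wasserstein) regularizer between emulated and reference state distributions could look like. The function names (`sinkhorn_cost`, `sinkhorn_divergence`), the uniform particle weights, the squared-Euclidean ground cost, and all hyperparameters are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sinkhorn_cost(X, Y, eps=1.0, n_iters=200):
    """Entropy-regularized OT cost between two empirical point clouds
    with uniform weights and a squared-Euclidean ground cost."""
    C = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)  # pairwise cost matrix
    a = np.full(len(X), 1.0 / len(X))
    b = np.full(len(Y), 1.0 / len(Y))
    K = np.exp(-C / eps)                                        # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):                                    # Sinkhorn iterations
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]                             # entropic transport plan
    return np.sum(P * C)

def sinkhorn_divergence(X, Y, eps=1.0):
    """Debiased Sinkhorn divergence: S(X,Y) = OT(X,Y) - (OT(X,X) + OT(Y,Y)) / 2."""
    return (sinkhorn_cost(X, Y, eps)
            - 0.5 * sinkhorn_cost(X, X, eps)
            - 0.5 * sinkhorn_cost(Y, Y, eps))

# Hypothetical usage: penalize mismatch between the state distributions of
# emulated and reference trajectories (random stand-ins for real data here).
rng = np.random.default_rng(0)
emulated_states = rng.normal(size=(64, 8))    # batch of emulator-generated states
reference_states = rng.normal(size=(64, 8))   # batch of ground-truth states
reg_loss = sinkhorn_divergence(emulated_states, reference_states)
print(f"Sinkhorn regularization term: {reg_loss:.4f}")
```

In practice such a term would typically be added to (or replace) the pointwise forecasting loss and differentiated through the emulator; the abstract's adversarial and WGAN-style dual formulations are not reflected in this sketch.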


Improved Particle Approximation Error for Mean Field Neural Networks

Neural Information Processing Systems

Recent works (Chen et al., 2022; Suzuki et al., 2023b) have established particle approximation error bounds for mean field neural networks. In this work, we improve the dependence on the logarithmic Sobolev inequality (LSI) constants in their particle approximation errors, which can exponentially deteriorate with the regularization coefficient. One may consider adding Gaussian noise to the gradient descent to make the method more stable.
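The last sentence above suggests stabilizing gradient descent by injecting Gaussian noise. Below is a minimal sketch of such a noisy (Langevin-style) update for a finite-particle two-layer network; the toy regression problem, the `predict` function, and all hyperparameters are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a two-layer network viewed as N "particles" (hidden units),
# fit to a small regression problem. Illustrative only.
N, d = 50, 3                                  # number of particles, input dimension
W = rng.normal(size=(N, d))                   # particle parameters, one row per unit
X = rng.normal(size=(200, d))                 # inputs
y = np.sin(X @ np.array([1.0, -0.5, 0.3]))    # targets

def predict(W, X):
    # Mean-field prediction: average of per-particle features tanh(<w_i, x>).
    return np.tanh(X @ W.T).mean(axis=1)

lr, noise_scale, weight_decay = 0.1, 0.01, 1e-3
for step in range(500):
    resid = predict(W, X) - y                       # residuals, shape (n_samples,)
    sech2 = 1.0 - np.tanh(X @ W.T) ** 2             # tanh' evaluated per particle
    # Gradient of 0.5 * mean squared error w.r.t. each particle w_i,
    # plus a quadratic regularization term.
    grad = (resid[:, None] * sech2).T @ X / (len(X) * N) + weight_decay * W
    # Noisy gradient descent: plain gradient step plus injected Gaussian noise,
    # analogous to a discretized Langevin update on the particle system.
    W -= lr * grad
    W += np.sqrt(2.0 * lr) * noise_scale * rng.normal(size=W.shape)

print("final MSE:", np.mean((predict(W, X) - y) ** 2))
```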



On Scrambling Phenomena for Randomly Initialized Recurrent Networks

Neural Information Processing Systems

Recurrent Neural Networks (RNNs) frequently exhibit complicated dynamics, and their sensitivity to the initialization process often renders them notoriously hard to train.






0d9057d84a9fc37523bf826232ea6820-Paper-Conference.pdf

Neural Information Processing Systems

In the case of coupled skew tent maps, the proposed method consistently outperforms a five-layer Deep Neural Network (DNN) and a Long Short-Term Memory (LSTM) architecture for unidirectional coupling coefficient values ranging from 0.1 to 0.7.
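For context on the benchmark system, here is a minimal sketch of unidirectionally coupled skew tent maps swept over the quoted coupling range. The skew parameter `a = 0.7`, the specific drive-response coupling form, and the correlation diagnostic are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np

def skew_tent(x, a=0.7):
    """Skew tent map on [0, 1] with its peak at x = a."""
    return np.where(x < a, x / a, (1.0 - x) / (1.0 - a))

def coupled_skew_tent(c, a=0.7, n_steps=2000, seed=0):
    """Unidirectionally coupled skew tent maps (drive x -> response y).

    The coupling form y <- (1 - c) * T(y) + c * T(x) is one common convention;
    the paper's exact setup may differ.
    """
    rng = np.random.default_rng(seed)
    x, y = rng.uniform(), rng.uniform()
    traj = np.empty((n_steps, 2))
    for n in range(n_steps):
        x_new = skew_tent(x, a)                                     # autonomous drive
        y_new = (1.0 - c) * skew_tent(y, a) + c * skew_tent(x, a)   # driven response
        x, y = x_new, y_new
        traj[n] = (x, y)
    return traj

# Sweep the unidirectional coupling coefficient over the range quoted above.
for c in np.arange(0.1, 0.71, 0.2):
    traj = coupled_skew_tent(c)
    # Correlation between drive and response as a crude synchronization measure.
    corr = np.corrcoef(traj[:, 0], traj[:, 1])[0, 1]
    print(f"c = {c:.1f}: drive-response correlation = {corr:.3f}")
```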