dissipation
Supplementary Material: Spectrum Gaussian Processes Learning from Noisy and Sparse Data

A Derivation of the spectral representation

The ELBO is derived from Jensen's inequality as follows:

log p(Y) >= ∫∫∫ q(X, f, w) log [ p(Y, X, f, w) / q(X, f, w) ] dw df dX   (31)
          = ∫∫∫ p(f | w) q(w) …

The inference procedure of SSGP is shown in Algorithm 1. In the experiments, we set the integration time window to 1, and the parameters are updated by maximizing the ELBO (13) evaluated using D. In this appendix, we also describe the baseline models for the experiments in Section 6. D-SymODEN can also be applied to dissipative systems. SympGPR can estimate conservative vector fields from derivative observations by exploiting Hamiltonian mechanics; we used finite differences to obtain the derivatives for training.
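The Jensen step behind (31) is the standard ELBO bound. Writing out both sides (this is the generic derivation, consistent with the inequality as stated; the substitution for q on the following line is truncated in the source and not reconstructed here):

```latex
\begin{align*}
\log p(Y)
  &= \log \iiint q(X, f, \mathbf{w})\,
        \frac{p(Y, X, f, \mathbf{w})}{q(X, f, \mathbf{w})}
        \,\mathrm{d}\mathbf{w}\,\mathrm{d}f\,\mathrm{d}X \\
  &\ge \iiint q(X, f, \mathbf{w})
        \log \frac{p(Y, X, f, \mathbf{w})}{q(X, f, \mathbf{w})}
        \,\mathrm{d}\mathbf{w}\,\mathrm{d}f\,\mathrm{d}X ,
\end{align*}
```

where the inequality follows from the concavity of the logarithm (Jensen's inequality) applied to the expectation under q(X, f, w).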
Metriplectic Conditional Flow Matching for Dissipative Dynamics
Metriplectic conditional flow matching (MCFM) learns dissipative dynamics without violating first principles. Neural surrogates often inject energy and destabilize long-horizon rollouts; MCFM instead builds the conservative-dissipative split into both the vector field and a structure-preserving sampler. MCFM trains via conditional flow matching on short transitions, avoiding long-rollout adjoints. At inference, a Strang-prox scheme alternates a symplectic update with a proximal metric step, ensuring discrete energy decay; an optional projection enforces strict decay when a trusted energy is available. We provide continuous- and discrete-time guarantees linking this parameterization and sampler to conservation, monotonic dissipation, and stable rollouts. On a controlled mechanical benchmark, MCFM yields phase portraits closer to ground truth and markedly fewer energy-increase and positive-energy-rate events than an equally expressive unconstrained neural flow, while matching terminal distributional fit.
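The Strang-prox idea (symplectic update alternated with a proximal dissipative step) can be caricatured on a damped harmonic oscillator. Everything below — the potential V(q) = q²/2, the friction coefficient gamma, the function names — is an illustrative assumption; in MCFM the fields are learned, not hand-written:

```python
def grad_V(q):
    """Gradient of a toy potential V(q) = q**2 / 2 (illustrative choice)."""
    return q

def strang_prox_step(q, p, dt, gamma=0.1):
    """One conservative-dissipative split step (sketch of the idea).

    Conservative half: leapfrog, a symplectic update for
    H(q, p) = p**2/2 + V(q).  Dissipative half: a proximal step on p,
    p = argmin_u (u - p)**2 / (2*dt) + gamma * u**2 / 2,
    which shrinks p and hence can only lower the kinetic energy.
    """
    p = p - 0.5 * dt * grad_V(q)   # leapfrog half-kick
    q = q + dt * p                 # drift
    p = p - 0.5 * dt * grad_V(q)   # half-kick
    p = p / (1.0 + dt * gamma)     # closed-form prox of the friction energy
    return q, p

def energy(q, p):
    return 0.5 * p * p + 0.5 * q * q

q, p = 1.0, 0.0
E0 = energy(q, p)
for _ in range(1000):
    q, p = strang_prox_step(q, p, dt=0.01)
# the prox step dissipates energy while leapfrog keeps the conservative
# part near-exact, so the rollout decays stably instead of blowing up
```

The split mirrors the guarantee in the abstract: the symplectic half cannot systematically inject energy, and the proximal half is dissipative by construction.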
- North America > United States > New York > Monroe County > Rochester (0.04)
- Europe > Switzerland (0.04)
Unsupervised operator learning approach for dissipative equations via Onsager principle
Chang, Zhipeng, Wen, Zhenye, Zhao, Xiaofei
Existing operator learning methods rely on supervised training with high-fidelity simulation data, which introduces significant computational cost. In this work, we propose the deep Onsager operator learning (DOOL) method, a novel unsupervised framework for solving dissipative equations. Rooted in the Onsager variational principle (OVP), DOOL trains a deep operator network by directly minimizing the OVP-defined Rayleighian functional, requiring no labeled data, and then marches forward in time explicitly through conservation/change laws for the solution. A second key innovation lies in the spatiotemporal decoupling strategy: the operator's trunk network processes spatial coordinates exclusively, enhancing training efficiency, while integrated external time stepping enables temporal extrapolation. Numerical experiments on typical dissipative equations validate the effectiveness of the DOOL method, and systematic comparisons with supervised DeepONet and MIONet demonstrate its enhanced performance. Extensions cover second-order wave models with dissipation, which do not directly follow the OVP.
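The unsupervised training signal can be sketched in a few lines. For a gradient flow, the OVP says the rate v minimizes a Rayleighian (dissipation plus free-energy change rate); the double-well free energy, grid, and names below are hypothetical stand-ins, not the paper's test problems:

```python
import numpy as np

# OVP toy: the rate v minimizes R(v) = Phi(v) + dF/dt, with dissipation
# Phi(v) = 0.5 * sum(v**2) * dx and free energy density 0.25*(u**2 - 1)**2
# (a double well).  The minimizer is the gradient flow v* = -dF/du.
dx = 0.1
u = np.linspace(-1.5, 1.5, 31)             # discretized field (toy grid)

def dF_du(u):
    return u**3 - u                        # variational derivative of F

def rayleighian(v, u):
    return 0.5 * np.sum(v**2) * dx + np.sum(dF_du(u) * v) * dx

v_star = -dF_du(u)                         # analytic minimizer of R(v)
# any competitor rate, e.g. v = 0, has a strictly larger Rayleighian

# In DOOL the rate comes from a deep operator network trained by
# minimizing R itself (no labels); time stepping stays external:
u_next = u + 0.01 * v_star                 # explicit Euler, dt = 0.01
```

This also illustrates the decoupling in the abstract: the network (here replaced by `v_star`) only sees spatial inputs, and temporal evolution is handled by the external stepper.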
- North America > United States > Kansas > Cowley County (0.05)
- Asia > China > Hubei Province > Wuhan (0.05)
- Europe > United Kingdom > North Sea > Southern North Sea (0.04)
Quantum state-agnostic work extraction (almost) without dissipation
Lumbreras, Josep, Huang, Ruo Cheng, Hu, Yanglin, Gu, Mile, Tomamichel, Marco
Department of Electrical and Computer Engineering, National University of Singapore (Dated: June 13, 2025)

We investigate work extraction protocols designed to transfer the maximum possible energy to a battery using sequential access to N copies of an unknown pure qubit state. The core challenge is designing interactions that optimally balance two competing goals: charging the battery optimally using the qubit in hand, and acquiring more information about the qubit to improve energy harvesting in subsequent rounds. Here, we leverage the exploration-exploitation trade-off from reinforcement learning to develop adaptive strategies that achieve energy dissipation scaling only poly-logarithmically in N. This represents an exponential improvement over current protocols based on full state tomography.

Introduction -- Given sequential access to finitely many identical samples of an unknown quantum system, what is the optimal strategy for extracting work from them and charging a battery?
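The explore/exploit budget can be caricatured classically. This is emphatically not the quantum protocol: the unknown angle theta, the noise level, and the m ≈ log²N exploration split are all assumptions chosen only to show why a polylogarithmic exploration budget can yield polylogarithmic total dissipation:

```python
import numpy as np

# Classical caricature: theta is an unknown "alignment angle" (a stand-in
# for the unknown qubit state).  Each round either measures it noisily
# (explore, harvesting nothing) or harvests cos(guess - theta) units of
# energy out of an ideal 1 (exploit).
rng = np.random.default_rng(0)
theta, N = 0.3, 10_000

m = int(np.log(N) ** 2)                        # polylog exploration budget
samples = theta + 0.1 * rng.standard_normal(m) # noisy explore-round outcomes
theta_hat = samples.mean()                     # estimate of the angle

harvested = (N - m) * np.cos(theta_hat - theta)
dissipation = N - harvested                    # shortfall from the ideal N
# roughly m + N * (theta_hat - theta)**2 / 2: the estimation error shrinks
# like 1/m, so a log^2(N) budget keeps the total shortfall polylog in N
```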
- Asia > Singapore (0.25)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- Energy > Energy Storage (1.00)
- Electrical Industrial Apparatus (1.00)
The Dissipation Theory of Aging: A Quantitative Analysis Using a Cellular Aging Map
Khodaee, Farhan, Zandie, Rohola, Xia, Yufan, Edelman, Elazer R.
Continuous-time systems are often represented by differential equations, including ordinary differential equations (ODEs), like the motion of a pendulum, and partial differential equations (PDEs), such as the heat equation, which describe system behavior in response to time and other variables. For systems that evolve at discrete intervals, difference equations, using linear or nonlinear recursive functions, capture state changes over time, as seen in models of population growth. Dynamical systems can also be described geometrically via phase or state space, where each point represents a system state and trajectories represent system evolution. Alternatively, vector fields describe time evolution as a flow, mapping system states across time steps and thereby outlining the system's path on its phase-space manifold. In physics, it is more common to describe dynamical systems using the Hamiltonian or Lagrangian formalism, which provides a more structured way of capturing the energy dynamics of a system. In systems where randomness or noise plays a role, stochastic differential equations (SDEs) are used.
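The pendulum mentioned above makes a compact worked example of the Hamiltonian view: the phase-space trajectory is traced by integrating Hamilton's equations, and a symplectic integrator keeps the energy nearly constant. The step size and horizon below are arbitrary illustrative choices:

```python
import numpy as np

# Pendulum in Hamiltonian form: H(q, p) = p**2/2 - cos(q), so Hamilton's
# equations read dq/dt = dH/dp = p and dp/dt = -dH/dq = -sin(q).
def leapfrog(q, p, dt):
    """One symplectic (leapfrog) step; preserves phase-space volume."""
    p = p - 0.5 * dt * np.sin(q)
    q = q + dt * p
    p = p - 0.5 * dt * np.sin(q)
    return q, p

q, p = 1.0, 0.0                      # release from angle 1 rad, at rest
H0 = 0.5 * p**2 - np.cos(q)
traj = [(q, p)]                      # points of the phase portrait
for _ in range(5000):
    q, p = leapfrog(q, p, dt=0.01)
    traj.append((q, p))
H_end = 0.5 * p**2 - np.cos(q)       # stays close to H0 over the rollout
```

The list `traj` is exactly the geometric picture from the paragraph: each pair (q, p) is one point of the state space, and the sequence is the system's trajectory on its phase-space manifold.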
- North America > United States > New York (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- Asia > China (0.04)
Port-Hamiltonian Neural Networks with Output Error Noise Models
Moradi, Sarvin, Beintema, Gerben I., Jaensson, Nick, Tóth, Roland, Schoukens, Maarten
Hamiltonian neural networks (HNNs) represent a promising class of physics-informed deep learning methods that utilize Hamiltonian theory as foundational knowledge within neural networks. However, their direct application to engineering systems is often challenged by practical issues, including the presence of external inputs, dissipation, and noisy measurements. This paper introduces a novel framework that enhances the capabilities of HNNs to address these real-life factors. We integrate port-Hamiltonian theory into the neural network structure, allowing for the inclusion of external inputs and dissipation, while mitigating the impact of measurement noise through an output-error (OE) model structure. The resulting output-error port-Hamiltonian neural networks (OE-pHNNs) can be adapted to model complex engineering systems with noisy measurements. Furthermore, we propose identifying OE-pHNNs with the subspace encoder approach (SUBNET), which efficiently approximates the complete simulation loss using subsections of the data and an encoder function that predicts initial states. By integrating SUBNET with OE-pHNNs, we achieve consistent models of complex engineering systems under noisy measurements. In addition, we perform a consistency analysis to ensure the reliability of the proposed data-driven model learning method. We demonstrate the effectiveness of our approach on system identification benchmarks, showing its potential as a powerful tool for modeling dynamic systems in real-world applications.
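The port-Hamiltonian structure underlying this framework can be written down directly: x_dot = (J - R) dH/dx + G u, with J skew-symmetric (conservative coupling), R positive semidefinite (dissipation), and G the input map. In an OE-pHNN the gradient dH/dx comes from a neural network; the sketch below instead assumes a quadratic H and hand-picked J, R, G:

```python
import numpy as np

# Toy port-Hamiltonian system (all matrices are illustrative assumptions).
J = np.array([[0.0, 1.0], [-1.0, 0.0]])   # skew-symmetric coupling
R = np.array([[0.0, 0.0], [0.0, 0.2]])    # PSD damping on the momentum
G = np.array([[0.0], [1.0]])              # input map

def dH_dx(x):
    return x                              # H(x) = 0.5 * ||x||^2

def f(x, u):
    return (J - R) @ dH_dx(x) + G.ravel() * u

# With u = 0, dH/dt = -dH/dx^T R dH/dx <= 0: the structure guarantees
# the model can only dissipate energy, never inject it.
x, dt, u = np.array([1.0, 0.0]), 0.01, 0.0
H = [0.5 * x @ x]
for _ in range(2000):
    x = x + dt * f(x, u)                  # explicit Euler, illustration only
    H.append(0.5 * x @ x)
```

The skew-symmetry of J and positive semidefiniteness of R are exactly the properties the paper bakes into the network parameterization so that external inputs and dissipation are handled by construction.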
- Europe > Netherlands > North Brabant > Eindhoven (0.05)
- Europe > Hungary > Budapest > Budapest (0.04)
Adjoint-based online learning of two-layer quasi-geostrophic baroclinic turbulence
Yan, Fei Er, Frezat, Hugo, Sommer, Julien Le, Mak, Julian, Otness, Karl
For reasons of computational constraint, most global ocean circulation models used for Earth System Modeling still rely on parameterizations of sub-grid processes, and limitations in these parameterizations affect the modeled ocean circulation and impact predictive skill. An increasingly popular approach is to leverage machine learning for parameterizations, regressing for a map between the resolved state and the missing feedbacks in a fluid system as a supervised learning task. However, the learning is often performed in an 'offline' fashion, without involving the underlying fluid dynamical model during the training stage. Here, we explore the 'online' approach, which involves the fluid dynamical model during the training stage, for the learning of baroclinic turbulence and its parameterization, with reference to ocean eddy parameterization. Two online approaches are considered: a fully adjoint-based online approach, related to traditional adjoint optimization approaches that require a 'differentiable' dynamical model, and an approximately online approach that approximates the adjoint calculation and does not require a differentiable dynamical model. The online approaches are found to be generally more skillful and numerically stable than offline approaches. Other details relating to online training, such as the window size, the machine learning model setup, and the design of the loss functions, are discussed to aid further explorations of the online training methodology for Earth System Modeling.
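The offline/online distinction can be made concrete with a scalar toy closure. The model, the grid search (standing in for adjoint-based descent on the rollout loss), and all names below are illustrative assumptions:

```python
import numpy as np

# Toy contrast: learn a scalar closure a for the coarse model
# x_{t+1} = a * x_t, given a "true" trajectory generated with a_true.
# Offline: regress on one-step pairs.  Online: minimize the misfit of a
# full model rollout -- the gradient of this rollout loss is what the
# adjoint (or its approximation) would supply in the real setting.
a_true, x0, T = 0.9, 1.0, 20
truth = x0 * a_true ** np.arange(T + 1)

# offline: least squares on (x_t, x_{t+1}) pairs taken from the truth
a_offline = truth[:-1] @ truth[1:] / (truth[:-1] @ truth[:-1])

def online_loss(a):
    x, loss = x0, 0.0
    for t in range(T):                 # roll the model out through time
        x = a * x
        loss += (x - truth[t + 1]) ** 2
    return loss / T

grid = np.linspace(0.5, 1.0, 501)      # grid search stands in for
losses = [online_loss(a) for a in grid]
a_online = grid[int(np.argmin(losses))]  # adjoint-based optimization
```

In this noise-free toy both routes recover a_true; the distinction matters precisely when the model is imperfect, where a good one-step (offline) fit does not guarantee a skillful, stable rollout, which is the motivation for training online.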
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- North America > United States > New York (0.04)
- Europe > France > Auvergne-Rhône-Alpes > Isère > Grenoble (0.04)