A Phase Space Trajectory Proofs

Neural Information Processing Systems

Here we present the proofs for the propositions from Section 4, concerning a … Then the time derivative of $z(t)$ is $\frac{d}{dt} z(t) = \frac{d}{dt} (x, \dot{x})^\top = (\dot{x}, \ddot{x})^\top$. Single phase space trajectories can feed into themselves, representing periodic motion. Effectively, an additional dimension, time, is added to phase space. This maintains generality and allows NODEs to be used as a component of a larger model. These are the same equations as those derived by Chen et al. The gradients from the positional part and the velocity part are found separately and added.
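The construction described here, stacking position and velocity into a phase-space state $z = (x, \dot{x})$ so that a second-order ODE becomes a first-order one, can be sketched numerically. This is a minimal illustration, not the paper's code; the damped-oscillator dynamics and the solver settings are assumptions chosen for the example.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical example: rewrite the second-order ODE x'' = f(x, x') as a
# first-order ODE on the stacked phase-space state z = (x, x'), so that a
# standard first-order solver can integrate it.

def f(x, v):
    """Acceleration for a damped harmonic oscillator: x'' = -x - 0.1 x'."""
    return -x - 0.1 * v

def dzdt(t, z):
    x, v = z
    return [v, f(x, v)]  # d/dt (x, x') = (x', x'')

sol = solve_ivp(dzdt, t_span=(0.0, 10.0), y0=[1.0, 0.0])
x_final, v_final = sol.y[:, -1]
```

Because the oscillator is damped, the phase-space point spirals inward: the final state has strictly smaller "energy" $x^2 + \dot{x}^2$ than the initial condition.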


On Second Order Behaviour in Augmented Neural ODEs

Neural Information Processing Systems

While previous work has mostly focused on first-order ODEs, the dynamics of many systems, especially in classical physics, are governed by second-order laws. In this work, we consider Second Order Neural ODEs (SONODEs).


Modulated Neural ODEs

Auzina, Ilze Amanda, Yıldız, Çağatay, Magliacane, Sara, Bethge, Matthias, Gavves, Efstratios

arXiv.org Machine Learning

Neural ordinary differential equations (NODEs) have proven useful for learning the non-linear dynamics of arbitrary trajectories. However, current NODE methods capture variation across trajectories only via the initial state value or via auto-regressive encoder updates. In this work, we introduce Modulated Neural ODEs (MoNODEs), a novel framework that separates dynamic states from underlying static factors of variation and improves existing NODE methods. In particular, we introduce $\textit{time-invariant modulator variables}$ that are learned from the data. We incorporate our proposed framework into four existing NODE variants. We test MoNODE on oscillating systems, videos and human walking trajectories, where each trajectory has trajectory-specific modulation. Our framework consistently improves the ability of existing models to generalize to new dynamic parameterizations and to perform far-horizon forecasting. In addition, we verify that the proposed modulator variables are informative of the true unknown factors of variation, as measured by $R^2$ scores.
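The core idea of a time-invariant modulator can be sketched as a vector field $f(z, m)$ conditioned on a per-trajectory static variable $m$. The sketch below is illustrative only and not the authors' API: in MoNODE the modulator is learned from data, whereas here it is fixed by hand (as an oscillation frequency) to show how one shared vector field produces trajectory-specific behaviour.

```python
import numpy as np

# Illustrative sketch: a modulated vector field f(z, m), where m is a
# time-invariant, per-trajectory modulator. Two trajectories share the same
# f and initial state and differ only through m, here an oscillation frequency.

def f(z, m):
    x, v = z
    return np.array([v, -(m ** 2) * x])  # harmonic oscillator, frequency m

def euler_rollout(z0, m, dt=0.01, steps=1000):
    """Explicit-Euler rollout of dz/dt = f(z, m) from initial state z0."""
    z = np.array(z0, dtype=float)
    traj = [z.copy()]
    for _ in range(steps):
        z = z + dt * f(z, m)
        traj.append(z.copy())
    return np.stack(traj)

slow = euler_rollout([1.0, 0.0], m=1.0)  # low-frequency trajectory
fast = euler_rollout([1.0, 0.0], m=3.0)  # high-frequency trajectory
```

The two rollouts diverge purely because of the modulator, which is the variation across trajectories that MoNODE's learned modulator variables are intended to capture.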


On Second Order Behaviour in Augmented Neural ODEs

Norcliffe, Alexander, Bodnar, Cristian, Day, Ben, Simidjievski, Nikola, Liò, Pietro

arXiv.org Machine Learning

Neural Ordinary Differential Equations (NODEs) are a new class of models that transform data continuously through infinite-depth architectures. The continuous nature of NODEs has made them particularly suitable for learning the dynamics of complex physical systems. While previous work has mostly focused on first-order ODEs, the dynamics of many systems, especially in classical physics, are governed by second-order laws. In this work, we consider Second Order Neural ODEs (SONODEs). We show how the adjoint sensitivity method can be extended to SONODEs and prove that the optimisation of a first-order coupled ODE is equivalent and computationally more efficient. Furthermore, we extend the theoretical understanding of the broader class of Augmented NODEs (ANODEs) by showing that they can also learn higher-order dynamics with a minimal number of augmented dimensions, but at the cost of interpretability. This indicates that the advantages of ANODEs go beyond the extra space offered by the augmented dimensions, as originally thought. Finally, we compare SONODEs and ANODEs on synthetic and real dynamical systems and demonstrate that the inductive biases of the former generally result in faster training and better performance.
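The inductive-bias contrast between SONODEs and ANODEs can be made concrete with a small sketch. This is not the paper's implementation; the toy "networks" below are hand-picked linear maps standing in for learned functions. A SONODE hard-codes the phase-space structure (the first component's derivative is exactly the velocity, and a network outputs only the acceleration), while a generic ANODE lets a network output the full derivative of the augmented state.

```python
import numpy as np

# SONODE-style field: structure d/dt (x, v) = (v, a(x, v)) is fixed;
# only the acceleration a is a learned function.
def sonode_field(z, accel_net):
    x, v = z
    return np.array([v, accel_net(x, v)])

# ANODE-style field: unconstrained; the network outputs both derivatives
# of the augmented state.
def anode_field(z, full_net):
    return full_net(z)

# Toy stand-ins for learned networks (damped harmonic oscillator).
accel_net = lambda x, v: -x - 0.1 * v
full_net = lambda z: np.array([z[1], -z[0] - 0.1 * z[1]])

z = np.array([1.0, 0.0])
sonode_out = sonode_field(z, accel_net)  # structure guarantees dx/dt = v
anode_out = anode_field(z, full_net)     # same field here, but only by choice
```

The two fields coincide only because `full_net` was chosen to respect the phase-space structure; an ANODE must discover that structure during training, whereas a SONODE has it built in, which is the inductive bias the abstract credits for faster training.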