A Robust SINDy Autoencoder for Noisy Dynamical System Identification

Ding, Kairui

arXiv.org Machine Learning

Sparse identification of nonlinear dynamics (SINDy) has been widely used to discover the governing equations of a dynamical system from data. It uses sparse regression techniques to identify parsimonious models of unknown systems from a library of candidate functions. Therefore, it relies on the assumption that the dynamics are sparsely represented in the coordinate system used. To address this limitation, one seeks a coordinate transformation that provides reduced coordinates capable of reconstructing the original system. Recently, SINDy autoencoders have extended this idea by combining sparse model discovery with autoencoder architectures to learn simplified latent coordinates together with parsimonious governing equations. A central challenge in this framework is robustness to measurement error. Inspired by noise-separating neural network structures, we incorporate a noise-separation module into the SINDy autoencoder architecture, thereby improving robustness and enabling more reliable identification of noisy dynamical systems. Numerical experiments on the Lorenz system show that the proposed method recovers interpretable latent dynamics and accurately estimates the measurement noise from noisy observations.
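The sparse regression step the abstract refers to is commonly realized as sequentially thresholded least squares (STLSQ). The sketch below is illustrative only; the function name, toy library, and threshold are our assumptions, not details from the paper:

```python
import numpy as np

def sindy_stlsq(X, X_dot, library, threshold=0.1, n_iter=10):
    """Minimal STLSQ sketch: fit X_dot = Theta(X) @ Xi, then repeatedly
    zero out small coefficients and refit on the remaining active terms."""
    Theta = library(X)                       # candidate-function library
    Xi = np.linalg.lstsq(Theta, X_dot, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(Xi) < threshold       # terms to prune
        Xi[small] = 0.0
        for k in range(X_dot.shape[1]):      # refit each state separately
            big = ~small[:, k]
            if big.any():
                Xi[big, k] = np.linalg.lstsq(
                    Theta[:, big], X_dot[:, k], rcond=None)[0]
    return Xi
```

For example, with data generated from the one-dimensional system x' = -2x and a two-term library [x, x^2], the procedure recovers the coefficient -2 on x and prunes the spurious x^2 term.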


6 Supplementary Material

Neural Information Processing Systems

The original CLUTRR data generation framework ensured that no test proof appears in the training set, in order to test whether a model can generalize to unseen proofs. Initial results on the original CLUTRR test sets showed strong model performance (≈99%) on levels seen during training (2, 4, 6) but no generalization at all (≈0%) to other levels. The models are given "[story] [query]" as input and asked to generate the proof and answer. Models are trained on levels 2, 4, and 6 only. In our case, the entity names are important for evaluating systematic generalization.







Neural Information Processing Systems

The original MuZero did not use sticky actions (Machado et al., 2017), a 25% chance that the selected action is ignored and the previous action is repeated instead, for its Atari experiments. For all experiments in this work we used a network architecture based on the one introduced by MuZero (Schrittwieser et al., 2020). To implement the network, we used the modules provided by the Haiku neural network library (Hennigan et al., 2020). We did not observe any benefit from using a Gaussian mixture, so in all our experiments we used a single Gaussian with diagonal covariance. All experiments used the Adam optimiser (Kingma & Ba, 2015) with decoupled weight decay (Loshchilov & Hutter, 2017) for training.
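For reference, the sticky-action mechanism described above is simple to state in code. This is a minimal sketch; the function name is ours, not from MuZero or the Arcade Learning Environment:

```python
import random

def apply_sticky_action(selected_action, prev_action, stickiness=0.25,
                        rng=random):
    """Sticky actions (Machado et al., 2017): with probability
    `stickiness`, ignore the agent's selected action and repeat the
    previous one, injecting stochasticity into Atari environments."""
    if rng.random() < stickiness:
        return prev_action
    return selected_action
```

Setting `stickiness=0.0` recovers the deterministic behavior that the original MuZero Atari experiments used.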




Supplementary Materials for House of Cans: Covert Transmission of Internal Datasets via Capacity-Aware Neuron Steganography

Neural Information Processing Systems

However, considering the ever-evolving paradigms in deep learning, employees with ulterior motives may fabricate justifications, such as the requirements of data augmentation [6] or the purpose of multimodal learning [3], to apply for access to both relevant and irrelevant private datasets, a tactic that is common in social engineering [4].