A joint optimization approach to identifying sparse dynamics using least squares kernel collocation

Hsu, Alexander W., Salas, Ike W. Griss, Stevens-Haas, Jacob M., Kutz, J. Nathan, Aravkin, Aleksandr, Hosseini, Bamdad

arXiv.org Machine Learning

The identification of ordinary differential equations (ODEs) and dynamical systems is a fundamental problem in control [32, 59, 60], data assimilation [42, 84], and more recently in scientific machine learning (ML) [11, 72, 74]. While algorithms such as Sparse Identification of Nonlinear Dynamics (SINDy) and its variants [46] are widely used by practitioners, they often fail in scenarios where observations of the state of the system are scarce, indirect, and noisy. In such scenarios, modifications to SINDy-type methods are required to enforce additional constraints on the recovered equations to make them consistent with the observational data. Put simply, traditional SINDy-type methods work in two steps: (1) the data is used to filter the state of the system and estimate the derivatives, and (2) the filtered state is used to learn the underlying dynamics. In the regime of scarce, noisy, and incomplete data, step 1 is inaccurate, which can propagate to poor results in the subsequent step 2. In this paper, we propose an all-at-once approach to filtering and equation learning based on collocation in a reproducing kernel Hilbert space (RKHS), which we term Joint SINDy (JSINDy), and show that the issues above can be mitigated by performing both steps together. This joins a broader class of dynamics-informed methods that integrate the governing equations directly into the learning objective, either as hard constraints or as least-squares relaxations, which couples the problems of state estimation and model discovery. Representative examples include physics-informed and sparse-regression frameworks based on neural networks, splines, kernels, finite differences, and adjoint methods [21, 27, 39, 41, 72, 73, 88].
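The two-step baseline that the abstract contrasts against is easy to state concretely. The sketch below is a minimal illustration of that traditional pipeline on a toy 1-D ODE x' = -0.5x (it is not JSINDy itself): step 1 smooths the noisy observations and differentiates them numerically, and step 2 runs sequentially thresholded least squares over a small candidate library. The library, smoothing window, noise level, and threshold are all illustrative assumptions.

```python
import numpy as np

# Toy data: noisy observations of x' = -0.5 x (a 1-D linear ODE).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 201)
x_true = np.exp(-0.5 * t)
x_obs = x_true + 0.001 * rng.standard_normal(t.size)

# Step 1: filter the state and estimate derivatives (a boxcar smoother
# plus finite differences stand in for a more careful filter).
kernel = np.ones(5) / 5.0
x_filt = np.convolve(x_obs, kernel, mode="same")
dx = np.gradient(x_filt, t)

# Step 2: sparse regression of dx onto a candidate library [1, x, x^2]
# via sequentially thresholded least squares (the core SINDy loop).
sl = slice(5, -5)  # drop boxcar edge artifacts
theta = np.column_stack([np.ones_like(x_filt), x_filt, x_filt**2])[sl]
dxs = dx[sl]

xi = np.linalg.lstsq(theta, dxs, rcond=None)[0]
for _ in range(10):
    small = np.abs(xi) < 0.1          # hard threshold small coefficients
    xi[small] = 0.0
    big = ~small
    xi[big] = np.linalg.lstsq(theta[:, big], dxs, rcond=None)[0]

print(xi)  # the coefficient on x should be close to -0.5
```

When the observations are scarce or noisy, step 1's derivative estimates degrade, and the regression in step 2 inherits that error; this is exactly the failure mode that motivates solving both steps jointly.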


Stochastic Optimal Control and Estimation with Multiplicative and Internal Noise

Neural Information Processing Systems

A pivotal brain computation relies on the ability to sustain perception-action loops. Stochastic optimal control theory offers a mathematical framework to explain these processes at the algorithmic level through optimality principles.



Filtering Jump Markov Systems with Partially Known Dynamics: A Model-Based Deep Learning Approach

Stamatelis, George, Alexandropoulos, George C.

arXiv.org Artificial Intelligence

Abstract--This paper presents the Jump Markov Filtering Network (JMFNet), a novel model-based deep learning framework for real-time state estimation in jump Markov systems with unknown noise statistics and mode transition dynamics. A hybrid architecture comprising two Recurrent Neural Networks (RNNs) is proposed: one for mode prediction and another for filtering, based on a mode-augmented version of the recently presented KalmanNet architecture. The proposed RNNs are trained jointly using an alternating least squares strategy that enables mutual adaptation without supervision of the latent modes. Extensive numerical experiments on linear and nonlinear systems, including target tracking, pendulum angle tracking, Lorenz attractor dynamics, and a real-life dataset, demonstrate that the proposed JMFNet framework outperforms classical model-based filters (e.g., interacting multiple models and particle filters) as well as model-free deep learning baselines, particularly in non-stationary and high-noise regimes. It is also shown that JMFNet achieves a small yet meaningful improvement over the KalmanNet framework, which becomes much more pronounced in complicated systems or long trajectories. Finally, the method's performance is empirically validated to be consistent and reliable, exhibiting low sensitivity to initial conditions, hyperparameter selection, and incorrect model knowledge. Index Terms--Kalman filter, jump Markov system, switching processes, state-space model, model-based deep learning. The Kalman Filter (KF) [1], along with its extensions including the Extended KF (EKF) [2], is among the most well-known and widely used algorithms in the signal processing community, having an extensive range of applications. In fact, despite being developed over 50 years ago, KFs remain fundamental tools for engineering practitioners [3].


T-ESKF: Transformed Error-State Kalman Filter for Consistent Visual-Inertial Navigation

Tian, Chungeng, Hao, Ning, He, Fenghua

arXiv.org Artificial Intelligence

This paper presents a novel approach to address the inconsistency problem caused by observability mismatch in visual-inertial navigation systems (VINS). The key idea involves applying a linear time-varying transformation to the error-state within the Error-State Kalman Filter (ESKF). This transformation ensures that the unobservable subspace of the transformed error-state system becomes independent of the state, thereby preserving the correct observability of the transformed system against variations in linearization points. We introduce the Transformed ESKF (T-ESKF), a consistent VINS estimator that performs state estimation using the transformed error-state system. Furthermore, we develop an efficient propagation technique to accelerate the covariance propagation based on the transformation relationship between the transition and accumulated matrices of T-ESKF and ESKF. We validate the proposed method through extensive simulations and experiments, demonstrating better (or at least competitive) performance compared to state-of-the-art methods. The code is available at github.com/HITCSC/T-ESKF.


Remarks on stochastic cloning and delayed-state filtering

Mina, Tara, Marinello, Lindsey, Christian, John

arXiv.org Artificial Intelligence

Many estimation problems in robotics and navigation involve measurements that depend on prior states. A prominent example is odometry, which measures the relative change between states over time. Accurately handling these delayed-state measurements requires capturing their correlations with prior state estimates, and a widely used approach is stochastic cloning (SC), which augments the state vector to account for these correlations. This work revisits a long-established but often overlooked alternative--the delayed-state Kalman filter--and demonstrates that a properly derived filter yields exactly the same state and covariance update as SC, without requiring state augmentation. Moreover, the generalized Kalman filter formulation provides computational advantages, while also reducing memory requirements for higher-dimensional states. Our findings clarify a common misconception that Kalman filter variants are inherently unable to handle correlated delayed-state measurements, demonstrating that an alternative formulation achieves the same results more efficiently.
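The claimed equivalence can be checked on a toy problem. The sketch below is a hypothetical scalar illustration (not the paper's derivation): a relative measurement z = x_k - x_j + v between the current state x_k and an earlier state x_j is processed two ways, once via stochastic cloning with an augmented state, and once via a delayed-state update that uses the cross-covariance directly. All numbers are made up for the example; both routes yield the same posterior mean and covariance.

```python
import numpy as np

# Current, cross, and cloned covariances; state estimates at times k and j.
P_kk, P_kj, P_jj = 2.0, 0.8, 1.0
x_k, x_j = 5.0, 3.5
R = 0.1          # measurement noise variance
z = 1.2          # observed relative displacement

# Route 1: stochastic cloning -- augment the state with the clone x_j
# and run a standard Kalman update with H = [1, -1].
s = np.array([x_k, x_j])
P = np.array([[P_kk, P_kj], [P_kj, P_jj]])
H = np.array([[1.0, -1.0]])
S = H @ P @ H.T + R                 # innovation covariance
K = P @ H.T / S                     # Kalman gain for the augmented state
s_new = s + (K * (z - H @ s)).ravel()
P_new = P - K @ H @ P

# Route 2: delayed-state filter -- update x_k directly using the
# cross-covariance, with no state augmentation.
S2 = P_kk - 2.0 * P_kj + P_jj + R
K2 = (P_kk - P_kj) / S2
x_k2 = x_k + K2 * (z - (x_k - x_j))
P_kk2 = P_kk - K2 * S2 * K2

print(s_new[0], x_k2)      # identical updated state for x_k
print(P_new[0, 0], P_kk2)  # identical updated covariance for x_k
```

The delayed-state route performs the same update without carrying the clone, which is the computational and memory advantage the abstract highlights.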


Swarming Without an Anchor (SWA): Robot Swarms Adapt Better to Localization Dropouts Than a Single Robot

Horyna, Jiri, Jung, Roland, Weiss, Stephan, Ferrante, Eliseo, Saska, Martin

arXiv.org Artificial Intelligence

In this paper, we present the Swarming Without an Anchor (SWA) approach to state estimation in swarms of Unmanned Aerial Vehicles (UAVs) experiencing ego-localization dropout, where individual agents are laterally stabilized using relative information only. We propose to fuse decentralized state estimation with robust mutual perception and onboard sensor data to maintain accurate state awareness despite intermittent localization failures. Thus, the relative information used to estimate the lateral state of UAVs enables the identification of the unambiguous state of each UAV with respect to the local constellation. The resulting behavior reaches velocity consensus, as this task can be viewed as the double integrator synchronization problem. All disturbances and performance degradations, except a uniform translational drift of the swarm as a whole, are attenuated, enabling new opportunities for using tight cooperation to increase the reliability and resilience of multi-UAV systems. Simulations and real-world experiments validate the effectiveness of our approach, demonstrating its capability to sustain cohesive swarm behavior under challenging conditions of unreliable or unavailable primary localization. UAV swarms enhance mission capabilities by leveraging cooperative behavior to perform tasks more efficiently than single UAVs [1]-[7].



Recursive KalmanNet: Deep Learning-Augmented Kalman Filtering for State Estimation with Consistent Uncertainty Quantification

Mortada, Hassan, Falcon, Cyril, Kahil, Yanis, Clavaud, Mathéo, Michel, Jean-Philippe

arXiv.org Machine Learning

State estimation in stochastic dynamical systems with noisy measurements is a challenge. While the Kalman filter is optimal for linear systems with independent Gaussian white noise, real-world conditions often deviate from these assumptions, prompting the rise of data-driven filtering techniques. This paper introduces Recursive KalmanNet, a Kalman-filter-informed recurrent neural network designed for accurate state estimation with consistent error covariance quantification. Experiments with non-Gaussian measurement white noise demonstrate that our model outperforms both the conventional Kalman filter and an existing state-of-the-art deep learning-based estimator. The Kalman Filter (KF) [1] provides an optimal estimation of a state vector that evolves according to a linear differential equation, with measurements modeled as a linear combination of the state vector.
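For reference, the linear Kalman filter recursion that such learned filters augment consists of a predict step and a measurement update. The sketch below runs it on a toy constant-velocity model with position-only measurements; the specific matrices, noise levels, and horizon are illustrative assumptions, not anything from the paper.

```python
import numpy as np

# Constant-velocity model: state x = [position, velocity],
# noisy measurements of position only.
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (unit time step)
H = np.array([[1.0, 0.0]])               # measurement matrix
Q = 1e-4 * np.eye(2)                     # process noise covariance
R = np.array([[0.25]])                   # measurement noise covariance

def kf_step(x, P, z):
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update.
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

rng = np.random.default_rng(1)
x_true = np.array([0.0, 1.0])            # starts at 0 with unit velocity
x_est, P = np.zeros(2), np.eye(2)
for _ in range(50):
    x_true = F @ x_true
    z = H @ x_true + 0.5 * rng.standard_normal(1)
    x_est, P = kf_step(x_est, P, z)

print(x_est)  # position near 50, velocity near 1
```

KalmanNet-style methods keep this predict/update structure but replace the analytically computed gain K with the output of a recurrent network, which is what lets them cope with unknown or non-Gaussian noise statistics.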