koopman
Conformal Online Learning of Deep Koopman Linear Embeddings
Gao, Ben, Patracone, Jordan, Chrétien, Stéphane, Alata, Olivier
We introduce Conformal Online Learning of Koopman embeddings (COLoKe), a novel framework for adaptively updating Koopman-invariant representations of nonlinear dynamical systems from streaming data. Our modeling approach combines deep feature learning with multistep prediction consistency in the lifted space, where the dynamics evolve linearly. To prevent overfitting, COLoKe employs a conformal-style mechanism that shifts the focus from evaluating the conformity of new states to assessing the consistency of the current Koopman model. Updates are triggered only when the current model's prediction error exceeds a dynamically calibrated threshold, allowing selective refinement of the Koopman operator and embedding. Empirical results on benchmark dynamical systems demonstrate the effectiveness of COLoKe in maintaining long-term predictive accuracy while significantly reducing unnecessary updates and avoiding overfitting.
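The conformal-style trigger described in the COLoKe abstract can be sketched as a simple quantile test: keep a window of recent multistep prediction errors, calibrate a threshold from their empirical (1 - alpha) quantile, and update the model only when a new error exceeds it. This is a minimal illustrative sketch, not the paper's implementation; the function name, window contents, and alpha are hypothetical.

```python
import numpy as np

def should_update(errors, new_error, alpha=0.1):
    """Trigger a model update when `new_error` exceeds the empirical
    (1 - alpha) quantile of recently observed prediction errors."""
    threshold = np.quantile(errors, 1 - alpha)
    return new_error > threshold, threshold

# Toy usage: errors from a well-calibrated model, then a drifted one.
rng = np.random.default_rng(0)
history = rng.normal(1.0, 0.1, size=200)     # typical multistep errors
ok, thr = should_update(history, 0.95)       # in-distribution error: no update
drift, _ = should_update(history, 2.0)       # out-of-distribution error: update
```

Because the threshold is recalibrated from the error stream itself, the update rate adapts to the difficulty of the current regime rather than being fixed in advance.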
FRIREN: Beyond Trajectories -- A Spectral Lens on Time
Long-term time-series forecasting (LTSF) models are often presented as general-purpose solutions that can be applied across domains, implicitly assuming that all data is pointwise predictable. Using chaotic systems such as Lorenz-63 as a case study, we argue that geometric structure - not pointwise prediction - is the right abstraction for a dynamic-agnostic foundational model. Minimizing the Wasserstein-2 distance (W2), which captures geometric changes, and providing a spectral view of dynamics are essential for long-horizon forecasting. Our model, FRIREN (Flow-inspired Representations via Interpretable Eigen-networks), implements an augmented normalizing-flow block that embeds data into a normally distributed latent representation. It then generates a W2-efficient optimal path that can be decomposed into rotation, scaling, inverse rotation, and translation. This architecture yields locally generated, geometry-preserving predictions that are independent of the underlying dynamics, and a global spectral representation that functions as a finite Koopman operator with a small modification. This enables practitioners to identify which modes grow, decay, or oscillate, both locally and system-wide. FRIREN achieves an MSE of 11.4, MAE of 1.6, and SWD of 0.96 on Lorenz-63 in a 336-in, 336-out, dt=0.01 setting, surpassing TimeMixer (MSE 27.3, MAE 2.8, SWD 2.1). The model maintains effective prediction for 274 out of 336 steps, approximately 2.5 Lyapunov times. On Rossler (96-in, 336-out), FRIREN achieves an MSE of 0.0349, MAE of 0.0953, and SWD of 0.0170, outperforming TimeMixer's MSE of 4.3988, MAE of 0.886, and SWD of 3.2065. FRIREN is also competitive on standard LTSF datasets such as ETT and Weather. By connecting modern generative flows with classical spectral analysis, FRIREN makes long-term forecasting both accurate and interpretable, setting a new benchmark for LTSF model design.
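The "rotation, scaling, inverse rotation, and translation" structure in the FRIREN abstract is exactly the form of a symmetric positive-definite linear map plus a shift, which can be read off from an eigendecomposition. The sketch below is illustrative only (the matrix and shift are toy values, not FRIREN's learned maps):

```python
import numpy as np

# A symmetric positive-definite map A plus a shift b decomposes as
# "rotate, scale per axis, rotate back, translate":
# A = Q diag(s) Q^T with Q orthogonal (rotation) and s > 0 (scaling).
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
b = np.array([0.3, -0.1])

s, Q = np.linalg.eigh(A)             # eigenvalues s, rotation Q

def apply_decomposed(x):
    x = Q.T @ x                      # rotate into the eigenbasis
    x = s * x                        # scale each axis
    x = Q @ x                        # inverse rotation
    return x + b                     # translate

x = np.array([1.0, 2.0])
assert np.allclose(apply_decomposed(x), A @ x + b)
```

The per-axis scalings `s` are what make such a map spectrally interpretable: each eigenvalue says whether its mode grows or decays.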
On the Generalisation of Koopman Representations for Chaotic System Control
Hjikakou, Kyriakos, Cartagena, Juan Diego Cardenas, Sabatelli, Matthia
This paper investigates the generalisability of Koopman-based representations for chaotic dynamical systems, focusing on their transferability across prediction and control tasks. Using the Lorenz system as a testbed, we propose a three-stage methodology: learning Koopman embeddings through autoencoding, pre-training a transformer on next-state prediction, and fine-tuning for safety-critical control. Our results show that Koopman embeddings outperform both standard and physics-informed PCA baselines, achieving accurate and data-efficient performance. Notably, fixing the pre-trained transformer weights during fine-tuning leads to no performance degradation, indicating that the learned representations capture reusable dynamical structure rather than task-specific patterns. These findings support the use of Koopman embeddings as a foundation for multi-task learning in physics-informed machine learning. A project page is available at https://kikisprdx.github.io/.
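The prediction stage of the three-stage pipeline above rests on the standard Koopman idea: lift the state once with a learned encoder, then iterate a linear map in the lifted space. A self-contained toy sketch (the encoder and matrix below are illustrative stand-ins, not the paper's learned autoencoder or transformer):

```python
import numpy as np

# Hypothetical shapes: states x in R^2 lifted to z in R^3 by an encoder phi;
# dynamics are linear in the lifted space: z_{t+1} = K z_t.
def phi(x):
    # Stand-in for a learned autoencoder: lift (x1, x2) -> (x1, x2, x1*x2).
    return np.array([x[0], x[1], x[0] * x[1]])

K = np.diag([0.9, 0.8, 0.72])        # toy Koopman matrix on the lifted state

def rollout(x0, steps):
    """Predict by lifting once, then iterating the linear map."""
    z = phi(x0)
    traj = [z]
    for _ in range(steps):
        z = K @ z
        traj.append(z)
    return np.stack(traj)

traj = rollout(np.array([1.0, 2.0]), steps=10)
```

This toy lifting is Koopman-invariant by construction: the product coordinate evolves with factor 0.9 * 0.8 = 0.72, so the linear rollout stays consistent with the nonlinear state for all horizons.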
Learning Koopman Dynamics for Safe Legged Locomotion with Reinforcement Learning-based Controller
Kim, Jeonghwan, Han, Yunhai, Ravichandar, Harish, Ha, Sehoon
Learning-based algorithms have demonstrated impressive performance in agile locomotion of legged robots. However, learned policies are often complex and opaque due to the black-box nature of learning algorithms, which hinders predictability and precludes guarantees on performance or safety. In this work, we develop a novel safe navigation framework that combines Koopman operators with model-predictive control (MPC). Our method adopts Koopman operator theory to learn the linear evolution of the dynamics of the underlying locomotion policy, which can be effectively learned with Dynamic Mode Decomposition (DMD). Because the learned model is linear, we can readily leverage the standard MPC algorithm. Our framework is easy to implement and requires little prior knowledge, as it does not need access to the underlying dynamical system or control-theoretic techniques. We demonstrate that the learned linear dynamics predict the trajectories of legged robots better than baselines. In addition, we show that the proposed navigation framework achieves better safety, with fewer collisions, in challenging, dense environments with narrow passages.
- Information Technology > Artificial Intelligence > Representation & Reasoning > Optimization (0.93)
- Information Technology > Artificial Intelligence > Robots > Locomotion (0.92)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (0.71)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.68)
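The DMD fitting step the abstract relies on reduces to a least-squares problem over snapshot pairs: given states X and their time-shifted successors Y, find the linear operator A minimizing ||Y - AX||. A minimal sketch with a toy ground-truth map (all values hypothetical):

```python
import numpy as np

# Fit a linear operator A with Y ≈ A X from snapshot pairs -- the
# least-squares step at the core of Dynamic Mode Decomposition.
rng = np.random.default_rng(0)
A_true = np.array([[0.95, 0.10],
                   [-0.10, 0.95]])   # a stable toy "policy dynamics" map

X = rng.normal(size=(2, 200))        # snapshots x_0 ... x_{m-1}
Y = A_true @ X                       # time-shifted snapshots x_1 ... x_m

A_dmd = Y @ np.linalg.pinv(X)        # exact-DMD least-squares solution
```

With the linear model `A_dmd` in hand, a standard linear MPC formulation applies directly, which is the practical payoff the abstract points to.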
Koopman Learning with Episodic Memory
Redman, William T., Huang, Dean, Fonoberova, Maria, Mezić, Igor
Koopman operator theory, a data-driven dynamical systems framework, has found significant success in learning models from complex, real-world data sets, enabling state-of-the-art prediction and control. The greater interpretability and lower computational costs of these models, compared to traditional machine learning methodologies, make Koopman learning an especially appealing approach. Despite this, little work has been performed on endowing Koopman learning with the ability to learn from its own mistakes. To address this, we equip Koopman methods - developed for predicting non-stationary time-series - with an episodic memory mechanism, enabling global recall of (or attention to) periods in time where similar dynamics previously occurred. We find that a basic implementation of Koopman learning with episodic memory leads to significant improvements in prediction on synthetic and real-world data. Our framework has considerable potential for expansion, allowing for future advances, and opens exciting new directions for Koopman learning.
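The episodic recall mechanism described above can be sketched as nearest-neighbor lookup over stored windows of past dynamics: given the current window, retrieve the most similar previously seen period. This is a minimal illustrative sketch, not the paper's mechanism; the distance metric, window encoding, and data are hypothetical.

```python
import numpy as np

def recall(memory, query, k=1):
    """Return indices of the k stored windows closest to `query`
    (Euclidean distance), i.e. past periods with similar dynamics."""
    dists = np.linalg.norm(memory - query, axis=1)
    return np.argsort(dists)[:k]

# Toy memory of 3 past windows (flattened snippets of a time series).
memory = np.array([[0.0, 0.1, 0.2],
                   [5.0, 5.1, 5.2],
                   [0.0, -0.1, -0.2]])
query = np.array([0.05, 0.12, 0.21])
idx = recall(memory, query)          # index of the nearest past window
```

The recalled index can then select which previously fitted Koopman model (or which past data) to reuse for the current prediction.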
It's a Weird Time for Driverless Cars
The robotaxi is recording me sitting in the backseat, and I am recording it. Someone in the neighboring car is recording us both. It's an unusually hot day in San Francisco, and I am in a self-driving car named Charcuterie, operated by Cruise. Next to me is William Riggs, a professor at the University of San Francisco who studies self-driving cars. The front seats are both empty, and the wheel silently shifts as the car maneuvers itself along a thoroughfare next to Golden Gate Park.
- North America > United States > California > San Francisco County > San Francisco (0.52)
- Pacific Ocean > North Pacific Ocean > San Francisco Bay > Golden Gate (0.25)
- Transportation > Passenger (1.00)
- Transportation > Ground > Road (1.00)
- Information Technology > Robotics & Automation (1.00)
- Automobiles & Trucks (1.00)
Tesla's Recall of Full Self-Driving Targets a 'Fundamental' Flaw
After years selling its controversial Full Self-Driving software upgrade for thousands of dollars, Tesla today issued a recall for every one of the nearly 363,000 vehicles using the feature. The move was prompted by a US government agency saying the software had in "rare circumstances" put drivers in danger and could increase the risk of a crash in everyday situations. Recalls are common in the auto industry and mostly target particular parts or road situations. Tesla's latest recall is sweeping, with the National Highway Traffic Safety Administration saying the Full Self-Driving software can break local traffic laws and act in a way the driver doesn't expect in a grab bag of road situations. According to the agency's filing, those include driving through a yellow light on the verge of turning red; not properly stopping at a stop sign; speeding, due to failing to detect a road sign or because the driver has set their car to default to a faster speed; and making unexpected lane changes to move out of turn-only lanes when going straight through an intersection.
- Transportation > Ground > Road (1.00)
- Automobiles & Trucks (1.00)
- Government > Regional Government > North America Government > United States Government (0.59)
Strong Gravitational Lensing Parameter Estimation with Vision Transformer
Huang, Kuan-Wei, Chen, Geoff Chih-Fan, Chang, Po-Wen, Lin, Sheng-Chieh, Hsu, Chia-Jung, Thengane, Vishal, Lin, Joshua Yao-Yu
Quantifying the parameters and corresponding uncertainties of hundreds of strongly lensed quasar systems holds the key to resolving one of the most important scientific questions: the Hubble constant ($H_{0}$) tension. The commonly used Markov chain Monte Carlo (MCMC) method has been too time-consuming to achieve this goal, yet recent work has shown that convolutional neural networks (CNNs) can be an alternative with seven orders of magnitude improvement in speed. With 31,200 simulated strongly lensed quasar images, we explore the usage of Vision Transformers (ViT) for simulated strong gravitational lensing for the first time. We show that ViT can reach competitive results compared with CNNs, and is particularly good at some lensing parameters, including the most important mass-related parameters such as the lens center $\theta_{1}$ and $\theta_{2}$, the ellipticities $e_1$ and $e_2$, and the radial power-law slope $\gamma'$. With this promising preliminary result, we believe the ViT (or attention-based) network architecture can be an important tool for strong lensing science in the next generation of surveys. Our code and data are open-sourced at \url{https://github.com/kuanweih/strong_lensing_vit_resnet}.
How self-driving cars got stuck in the slow lane
"I would be shocked if we do not achieve full self-driving safer than a human this year," said Tesla chief executive, Elon Musk, in January. For anyone who follows Musk's commentary, this might sound familiar. In 2020, he promised autonomous cars the same year, saying: "There are no fundamental challenges." In 2019, he promised Teslas would be able to drive themselves by 2020 – converting into a fleet of 1m "robotaxis". He has made similar predictions every year going back to 2014.
- Transportation > Passenger (1.00)
- Transportation > Ground > Road (1.00)
- Information Technology > Robotics & Automation (1.00)
- Automobiles & Trucks (1.00)