Efficient Online Linear Control with Stochastic Convex Costs and Unknown Dynamics
Asaf Cassel, Alon Cohen, Tomer Koren
Adaptive control, the task of regulating an unknown linear dynamical system, is a classic control-theoretic problem that has been studied extensively since the 1950s [e.g., 8]. Classic results on adaptive control typically pertain to asymptotic stability and convergence to the optimal controller, while contemporary research focuses on regret minimization and finite-time guarantees. In linear control, both the state and the action are vectors in Euclidean spaces. At each time step, the controller views the current state of the system, chooses an action, and the system transitions to the next state. The next state is determined by a linear mapping of the current state and action, perturbed by zero-mean i.i.d. noise.
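For concreteness, the transition just described is commonly written as follows; the symbols $A_\star$, $B_\star$ for the unknown system matrices and $w_t$ for the noise term are notational assumptions here rather than taken verbatim from this excerpt:
\[
x_{t+1} = A_\star x_t + B_\star u_t + w_t ,
\]
where $x_t$ is the state, $u_t$ the action chosen by the controller, and $w_t$ the zero-mean i.i.d. noise at time $t$.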
June 22, 2022