Logarithmic regret for episodic continuous-time linear-quadratic reinforcement learning over a finite-time horizon
Matteo Basei, Xin Guo, Anran Hu, Yufei Zhang
We study finite-time horizon, continuous-time linear-quadratic reinforcement learning problems in an episodic setting, where both the state and control coefficients are unknown to the controller. We first propose a least-squares algorithm based on continuous-time observations and controls, and establish a logarithmic regret bound of order $O((\ln M)(\ln\ln M))$, where $M$ is the number of learning episodes. The analysis consists of two parts: a perturbation analysis, which exploits the regularity and robustness of the associated Riccati differential equation, and a parameter estimation error analysis, which relies on sub-exponential properties of continuous-time least-squares estimators. We further propose a practically implementable least-squares algorithm based on discrete-time observations and piecewise constant controls, which achieves a similar logarithmic regret bound with an additional term depending explicitly on the time stepsizes used in the algorithm.
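The discrete-time variant described above amounts to a certainty-equivalence loop: solve the Riccati differential equation with the current coefficient estimates, act with the resulting piecewise constant feedback over one episode, then refit the unknown coefficients by least squares on the observed state increments. The following Python sketch illustrates that loop for a hypothetical scalar system with dynamics $dX_t = (A X_t + B u_t)\,dt + dW_t$; the cost weights, the added exploration noise, and all numerical values are illustrative assumptions, not the paper's actual algorithm or its regret guarantees.

```python
import numpy as np

# Toy scalar LQ instance: dX_t = (A*X_t + B*u_t) dt + dW_t, with running cost
# Q*X^2 + R*u^2 and terminal cost G*X_T^2.  A and B are unknown to the learner;
# Q, R, G, T and the stepsize dt are known.  All values here are illustrative.
A_true, B_true = -1.0, 2.0           # ground truth, hidden from the algorithm
Q, R, G, T = 1.0, 1.0, 1.0, 1.0
dt = 0.01                            # observation / control stepsize
n_steps = int(T / dt)
rng = np.random.default_rng(0)

def riccati(A, B):
    """Solve P'(t) = -2*A*P - Q + B**2 * P**2 / R with P(T) = G backward in
    time by explicit Euler; returns P on the time grid t_k = k*dt."""
    P = np.empty(n_steps + 1)
    P[-1] = G
    for k in range(n_steps, 0, -1):
        dP = -2.0 * A * P[k] - Q + (B ** 2) * P[k] ** 2 / R
        P[k - 1] = P[k] - dt * dP    # step from t_k back to t_{k-1}
    return P

A_hat, B_hat = 0.0, 1.0              # initial guesses for the unknown coefficients
S = np.zeros((2, 2))                 # normal equations of the least-squares
b = np.zeros(2)                      # problem, accumulated across episodes

for episode in range(200):           # M = 200 learning episodes
    P_hat = riccati(A_hat, B_hat)    # certainty-equivalence Riccati solution
    X = 1.0
    for k in range(n_steps):
        # Piecewise constant feedback from the estimated model, plus a small
        # exploration perturbation for numerical identifiability in this toy.
        u = -B_hat * P_hat[k] * X / R + 0.1 * rng.standard_normal()
        dW = np.sqrt(dt) * rng.standard_normal()
        X_next = X + (A_true * X + B_true * u) * dt + dW
        # Regress the increment X_next - X on (X*dt, u*dt) to estimate (A, B).
        phi = np.array([X * dt, u * dt])
        S += np.outer(phi, phi)
        b += phi * (X_next - X)
        X = X_next
    A_hat, B_hat = np.linalg.solve(S + 1e-8 * np.eye(2), b)

print(f"estimated A = {A_hat:.3f}, estimated B = {B_hat:.3f}")
```

As the stepsize dt shrinks the regression increments better approximate the continuous-time observations, which is where the abstract's additional stepsize-dependent regret term comes from.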
May-17-2021