Action-Driven Processes for Continuous-Time Control

Ruimin He, Shaowei Lin

arXiv.org Machine Learning 

Modeling systems that exhibit both continuous and discontinuous state changes is a significant challenge in machine learning. For instance, biological spiking networks feature the continuous decay of neuron potentials alongside discontinuous spikes, which cause abrupt jumps in the potentials of downstream neurons. Designing appropriate objective functions, and applying gradient methods in the presence of these discontinuities, are among the difficulties of working with such systems.

Traditionally, ordinary and partial differential equations (ODEs and PDEs) are used to model continuous state changes, while Markov decision processes (MDPs) are used to capture discrete actions that drive environmental transitions. In this paper, we study Action-Driven Processes (ADPs), also known as generalized semi-Markov processes [12, 5, 16], which unify both types of dynamics within a single framework.

With continuous-time states and actions at the core of ADPs, a natural question is whether optimal policies for action selection can be learned using traditional reinforcement learning methods. The control-as-inference tutorial [9] elegantly demonstrated that maximum entropy reinforcement learning can be formulated as minimizing the Kullback-Leibler (KL) divergence between (a) a true trajectory distribution generated by action-state transitions and the policy, and (b) a model trajectory distribution that depends on the reward function.
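To make the hybrid dynamics concrete, the following is a minimal sketch (not taken from the paper) of two leaky integrate-and-fire neurons in which membrane potentials decay continuously while spikes cause discontinuous jumps in a downstream neuron; all parameter names and values here (`tau`, `v_thresh`, `i_in`, `w`, etc.) are illustrative assumptions:

```python
import numpy as np

def simulate_lif(t_end=0.1, dt=1e-4, tau=0.02, v_thresh=1.0,
                 v_reset=0.0, i_in=60.0, w=0.3):
    """Euler simulation of two leaky integrate-and-fire neurons,
    where neuron 0 drives neuron 1: potentials decay continuously,
    and a spike of neuron 0 makes neuron 1's potential jump by w."""
    n_steps = int(t_end / dt)
    v = np.zeros(2)          # membrane potentials
    spikes = [[], []]        # spike times per neuron
    for step in range(n_steps):
        t = step * dt
        # continuous part: leaky decay, plus a constant drive to neuron 0
        v[0] += dt * (-v[0] / tau + i_in)
        v[1] += dt * (-v[1] / tau)
        # discontinuous part: threshold crossing triggers a spike and reset
        if v[0] >= v_thresh:
            spikes[0].append(t)
            v[0] = v_reset
            v[1] += w        # abrupt increase in the downstream potential
        if v[1] >= v_thresh:
            spikes[1].append(t)
            v[1] = v_reset
    return spikes

spikes = simulate_lif()
```

With these illustrative parameters, neuron 0 spikes periodically while neuron 1 accumulates sub-threshold jumps; the point is only that the trajectory mixes smooth flow with instantaneous resets, which is exactly the regime ADPs are meant to capture.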
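For reference, the control-as-inference objective from [9] can be sketched in discrete-time notation (the paper itself works in continuous time, and the symbols below are generic rather than the paper's own): the policy-induced trajectory distribution is matched to a reward-tilted model distribution,

```latex
p_\pi(\tau) = p(s_1)\prod_{t=1}^{T} \pi(a_t \mid s_t)\, p(s_{t+1} \mid s_t, a_t),
\qquad
\hat{p}(\tau) \propto p(s_1)\prod_{t=1}^{T} p(s_{t+1} \mid s_t, a_t)\, e^{r(s_t, a_t)},
```

and minimizing the KL divergence

```latex
\min_{\pi}\; D_{\mathrm{KL}}\!\left( p_\pi(\tau) \,\|\, \hat{p}(\tau) \right)
\;=\;
\max_{\pi}\; \mathbb{E}_{p_\pi}\!\left[ \sum_{t=1}^{T} r(s_t, a_t) + \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \right] + \text{const},
```

recovers the maximum entropy reinforcement learning objective, since the transition factors cancel inside the KL term.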