A Regularized Approach to Sparse Optimal Policy in Reinforcement Learning

Yang, Wenhao, Li, Xiang, Zhang, Zhihua

Neural Information Processing Systems

We propose and study a general framework for regularized Markov decision processes (MDPs), where the goal is to find an optimal policy that maximizes the expected discounted total reward plus a policy regularization term. Existing entropy-regularized MDPs can be cast into our framework. Moreover, under our framework many regularization terms can induce multi-modality and sparsity, both of which are potentially useful in reinforcement learning. In particular, we present necessary and sufficient conditions for a regularizer to induce a sparse optimal policy. We also conduct a full mathematical analysis of the proposed regularized MDPs, covering the optimality condition, performance error, and control of sparseness. We provide a generic method for devising regularization forms and propose off-policy actor-critic algorithms for complex environment settings. We empirically analyze the numerical properties of the optimal policies and compare the performance of different sparse regularization forms in discrete and continuous environments.
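As a concrete illustration of the sparsity discussed above: under Shannon-entropy regularization the optimal policy is a softmax of the Q-values and assigns positive probability to every action, whereas a Tsallis-entropy-style regularizer yields a sparsemax-like closed form that can zero out low-value actions. The sketch below (our own minimal NumPy example, not the authors' code; the function names and Q-values are illustrative) contrasts the two:

```python
import numpy as np

def softmax_policy(q, lam=1.0):
    """Optimal policy under Shannon-entropy regularization:
    pi(a) proportional to exp(Q(a)/lam); every action keeps nonzero mass."""
    z = q / lam
    z = z - z.max()                   # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

def sparsemax_policy(q, lam=1.0):
    """Policy under a Tsallis-entropy-style regularizer: the Euclidean
    projection of Q/lam onto the simplex, which can assign exactly
    zero probability to low-value actions."""
    z = np.sort(q / lam)[::-1]        # scores in decreasing order
    k = np.arange(1, len(z) + 1)
    cssv = np.cumsum(z)
    support = z + 1.0 / k > cssv / k  # actions kept in the support
    k_star = k[support][-1]
    tau = (cssv[k_star - 1] - 1.0) / k_star   # threshold
    return np.maximum(q / lam - tau, 0.0)

q_values = np.array([2.0, 1.9, 0.3, -1.0])
print(softmax_policy(q_values))    # all four actions get positive mass
print(sparsemax_policy(q_values))  # [0.55, 0.45, 0.0, 0.0]: sparse support
```

Here the regularization strength lam controls sparseness: shrinking it concentrates the sparsemax policy on fewer actions, matching the "sparseness control" analysis mentioned in the abstract.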


Reviews: A Regularized Approach to Sparse Optimal Policy in Reinforcement Learning

Neural Information Processing Systems

Although some techniques are analogous to previous work (which is not bad per se, as it allows one to apply more general regularisers within existing frameworks such as soft actor-critic with only small changes), this work differs significantly from previous work and yields new insights into how sparse policies can (or cannot) be obtained. The claims are supported by proofs, and the experiments confirm that considering more flexible regularizations can be beneficial in different tasks. There are some issues with the continuous case; see the section on improvements for details. Further, the authors claim that the trigonometric and exponential function families yield multimodal policies (line 287). However, it is not clear to me how this differs from, say, entropy regularisation, or why a softmax policy cannot have multiple modes (unless, of course, the policy is parameterized with a single Gaussian in the continuous case, but that is a different issue).
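The reviewer's softmax point is easy to verify numerically in the discrete-action case; the following minimal sketch (our own illustration, not from the paper or the review) shows a softmax policy with two modes:

```python
import numpy as np

# Two actions with near-equal Q-values: the softmax policy places a
# peak on each of them, i.e. it is multimodal over the discrete action set.
q = np.array([1.0, 0.98, -2.0, -2.0])
p = np.exp(q - q.max())
p /= p.sum()
print(p)  # ~[0.481, 0.471, 0.024, 0.024]: two dominant modes
```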


Reviews: A Regularized Approach to Sparse Optimal Policy in Reinforcement Learning

Neural Information Processing Systems

The paper contributes valuable insights to the understanding of the influence of regularization on policy optimization in MDPs. The reviewers reached a consensus that the paper's results have significant merit, and the paper can be accepted for presentation.


An Analysis of Regularized Approaches for Constrained Machine Learning

Lombardi, Michele, Baldo, Federico, Borghesi, Andrea, Milano, Michela

arXiv.org Artificial Intelligence

Regularization-based approaches for injecting constraints in Machine Learning (ML) were introduced in earlier literature. Given the recent interest in ethical and trustworthy AI, however, several works are resorting to these approaches to enforce desired properties over an ML model. The regularization function C denotes a vector of (nonnegative) constraint-violation indices for m constraints, while λ ≥ 0 is a vector of weights (or multipliers). As an example, in a regression problem we may desire a specific output ordering for two input vectors in the training set; for a classification-balance constraint, if n is even, the violation term is 0 for perfectly balanced classifications.
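To make the recipe concrete, here is a minimal sketch of a regularized loss of the form L(y) + λᵀC(y) (our own illustration; the base loss, the two constraints, and the weights are hypothetical, not taken from the paper):

```python
import numpy as np

def constrained_loss(y_pred, y_true, lam):
    """Base loss plus lambda-weighted constraint-violation indices,
    i.e. L(y) + lam^T C(y) with C >= 0 elementwise."""
    base = np.mean((y_pred - y_true) ** 2)        # ordinary MSE loss

    # C1: ordering constraint -- we want y_pred[0] <= y_pred[1];
    # the hinge term is 0 when satisfied, positive otherwise.
    c_order = max(y_pred[0] - y_pred[1], 0.0)

    # C2: balance constraint -- predictions above/below 0.5 should be
    # split evenly; the term is 0 for a perfectly balanced classification.
    c_balance = abs(np.mean(y_pred > 0.5) - 0.5)

    C = np.array([c_order, c_balance])            # violation indices, C >= 0
    return base + lam @ C

y_pred = np.array([0.9, 0.2, 0.6, 0.4])
y_true = np.array([0.8, 0.3, 0.7, 0.3])
print(constrained_loss(y_pred, y_true, lam=np.array([1.0, 0.5])))
```

Raising an entry of lam trades predictive accuracy for tighter satisfaction of the corresponding constraint, which is exactly the trade-off such regularized approaches analyze.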

