Goto


Towards a Data Efficient Off-Policy Policy Gradient

AAAI Conferences

The ability to learn from off-policy data -- data generated from past interaction with the environment -- is essential to data efficient reinforcement learning. Recent work has shown that the use of off-policy data not only allows the re-use of data but can even improve performance in comparison to on-policy reinforcement learning. In this work we investigate whether a recently proposed method for learning a better data generation policy, commonly called a behavior policy, can also increase the data efficiency of policy gradient reinforcement learning. Empirical results demonstrate that with an appropriately selected behavior policy we can estimate the policy gradient more accurately. The results also motivate further work on methods for adapting the behavior policy as the policy we are learning changes.
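The core idea in this abstract, estimating a policy gradient from data generated by a separate behavior policy, can be illustrated with an importance-sampling correction. The sketch below is not the paper's proposed behavior-policy selection method; it is a minimal, hypothetical REINFORCE-style estimator in Python, where `target_logprob`, `behavior_logprob`, and `target_grad_logprob` are assumed callables supplied by the caller.

```python
import numpy as np

def off_policy_pg_estimate(states, actions, returns,
                           target_logprob, behavior_logprob,
                           target_grad_logprob):
    """Importance-weighted REINFORCE-style policy gradient estimate.

    Illustrative sketch only; the paper's behavior-policy selection
    method is not reproduced here. All three callables are assumptions:
      target_logprob(s, a)      -> log pi_theta(a | s)
      behavior_logprob(s, a)    -> log mu(a | s) for the behavior policy mu
      target_grad_logprob(s, a) -> grad_theta log pi_theta(a | s)
    """
    grad = 0.0
    for s, a, g in zip(states, actions, returns):
        # Importance weight corrects for actions being sampled from the
        # behavior policy rather than the policy being learned.
        rho = np.exp(target_logprob(s, a) - behavior_logprob(s, a))
        grad = grad + rho * g * target_grad_logprob(s, a)
    return grad / len(states)
```

Intuitively, an appropriately chosen behavior policy keeps the importance weights `rho` close to one, which is what makes an estimator of this kind more accurate for a fixed amount of data.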


Is the far-right shaping the EU's migration policy?

Al Jazeera

It's an issue that has divided Europe for years - illegal migration. Almost two million people have risked their lives crossing the Mediterranean Sea since 2014. This movement towards Europe continues to take a devastating toll on human life. Thousands die during their desperate journeys. Border control policies and other restrictions have led to a decrease in the number of refugees and migrants arriving, but members of the European Union still can't agree on what to do with them.


Dutch Students March for Better Climate Policies

U.S. News

Thousands of students are skipping classes to join a march in support of more ambitious climate policies in the Netherlands.


Ciosek

AAAI Conferences

We propose expected policy gradients (EPG), which unify stochastic policy gradients (SPG) and deterministic policy gradients (DPG) for reinforcement learning. Inspired by expected sarsa, EPG integrates across actions when estimating the gradient, instead of relying only on the action in the sampled trajectory. We establish a new general policy gradient theorem, of which the stochastic and deterministic policy gradient theorems are special cases. We also prove that EPG reduces the variance of the gradient estimates without requiring deterministic policies and, for the Gaussian case, with no computational overhead. Finally, we show that it is optimal in a certain sense to explore with a Gaussian policy such that the covariance is proportional to the exponential of the scaled Hessian of the critic with respect to the actions. We present empirical results confirming that this new form of exploration substantially outperforms DPG with the Ornstein-Uhlenbeck heuristic in four challenging MuJoCo domains.
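The abstract's idea of integrating across actions, rather than relying only on the sampled action, can be sketched for a discrete action space as follows. This is a minimal illustration under assumptions, not the paper's EPG implementation (which also covers the Gaussian case and the Hessian-based exploration covariance); `policy_probs`, `policy_grad_probs`, and `critic_q` are hypothetical callables.

```python
import numpy as np

def epg_style_gradient(state, policy_probs, policy_grad_probs, critic_q):
    """Expected-policy-gradient-style estimate for a discrete action space.

    Hypothetical interfaces (assumptions, not from the paper):
      policy_probs(s)      -> pi_theta(. | s),            shape (num_actions,)
      policy_grad_probs(s) -> grad_theta pi_theta(. | s), shape (num_actions, num_params)
      critic_q(s)          -> Q(s, .),                    shape (num_actions,)
    """
    grads = policy_grad_probs(state)
    q_values = critic_q(state)
    # Integrate (sum) across all actions: sum_a grad_theta pi(a|s) * Q(s, a),
    # instead of using the score-function term of a single sampled action.
    return (grads * q_values[:, None]).sum(axis=0)
```

Because the expectation over actions is computed exactly against the critic, no single-action sampling noise enters this term, which is the intuition behind the variance reduction the abstract describes.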


Recycling Scaled Back in Sitka Due to China Policy Change

U.S. News

Sitka's recycling contractor has confirmed that a policy change in China means the company can no longer accept mixed paper, newspaper, or a number of recyclable plastics.