Stabilizing Off-Policy Q-Learning via Bootstrapping Error Reduction

Neural Information Processing Systems

One of the primary drivers of the success of machine learning methods in open-world perception settings, such as computer vision [19] and NLP [8], has been the ability of high-capacity function approximators, such as deep neural networks, to learn generalizable models from large amounts of data.




State Regularized Policy Optimization on Data with Dynamics Shift

Neural Information Processing Systems

We then demonstrate a lower-bound performance guarantee on policies regularized by the stationary state distribution. In practice, SRPO can be an add-on module to context-based algorithms in both online and offline RL settings.
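The core idea of regularizing a policy toward a reference stationary state distribution can be sketched as a reward-shaping term. The following is a minimal illustration only, assuming diagonal-Gaussian density models of the state distributions and a penalty weight `alpha`; the function names and the exact form of the penalty are not taken from the paper.

```python
import numpy as np

def log_density(state, mean, var):
    """Log-density of a diagonal Gaussian model of a state distribution."""
    return -0.5 * np.sum((state - mean) ** 2 / var + np.log(2 * np.pi * var))

def regularized_reward(reward, state, ref_mean, ref_var, cur_mean, cur_var,
                       alpha=0.1):
    """Shape the reward with a log-ratio penalty between the current
    policy's state distribution and the reference stationary one.
    (Illustrative stand-in for a state-distribution regularizer.)"""
    penalty = (log_density(state, cur_mean, cur_var)
               - log_density(state, ref_mean, ref_var))
    return reward - alpha * penalty

# Toy usage: a 2-D state, slightly shifted current distribution.
s = np.array([0.5, -0.2])
r = regularized_reward(1.0, s,
                       ref_mean=np.zeros(2), ref_var=np.ones(2),
                       cur_mean=np.full(2, 0.1), cur_var=np.ones(2))
```

States that are more likely under the current distribution than under the reference one are penalized, nudging the policy back toward the reference state visitation.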



[Figure (a): the initial setup with two agents (Agent 1, Agent 2) and two river tiles]

Neural Information Processing Systems

The Sequential Social Dilemma Games, introduced in Leibo et al. [2017], are a kind of MARL environment; all of these have open-source implementations in [Vinitsky et al., 2019]. The cleaning beam is shown in Figure 8a. Figure 8 gives an example of Agent 1 using the "clean" action while facing East: the "main" beam extends directly in front of the agent, flanked by two auxiliary beams, and a beam stops when it hits a dirty river tile. Agent 1's action is resolved first.
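The beam mechanic described above can be sketched as a simple grid traversal. This is an illustrative assumption of the rules, not the reference implementation: the grid encoding (`"D"` for dirty, `"C"` for clean), the function name `beam_path`, and the single-beam simplification are all hypothetical.

```python
# Direction vectors for a (row, col) grid; only the facing direction matters.
DIRS = {"East": (0, 1), "West": (0, -1), "North": (-1, 0), "South": (1, 0)}

def beam_path(grid, pos, facing):
    """Trace one beam from the agent's position in the facing direction.
    The beam cleans and stops at the first dirty tile ('D'); otherwise
    it stops at the grid edge. Returns the tiles the beam covered."""
    dr, dc = DIRS[facing]
    r, c = pos
    path = []
    while True:
        r, c = r + dr, c + dc
        if not (0 <= r < len(grid) and 0 <= c < len(grid[0])):
            break  # beam left the grid
        path.append((r, c))
        if grid[r][c] == "D":  # dirty river tile absorbs the beam
            grid[r][c] = "C"   # the tile becomes clean
            break
    return path

# Toy usage: agent at (0, 0) facing East, dirty tile two steps ahead.
grid = [[".", ".", "D", "."]]
path = beam_path(grid, (0, 0), "East")
```

Here the beam covers `(0, 1)` and `(0, 2)`, cleans the dirty tile at `(0, 2)`, and never reaches `(0, 3)`.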