A Study of Plasticity Loss in On-Policy Deep Reinforcement Learning

Neural Information Processing Systems

We demonstrate that plasticity loss is pervasive under domain shift in this regime, and that a number of methods developed to resolve it in other settings fail, sometimes even performing worse than applying no intervention at all. In contrast, we find that a class of "regenerative" methods are able to consistently mitigate plasticity loss in a variety of contexts, including in gridworld tasks and
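The abstract does not name a specific regenerative method; one well-known example of this class is shrink-and-perturb (Ash & Adams, 2020), which periodically shrinks network weights toward zero and adds fresh noise so that stale units regain trainability. A minimal sketch, purely illustrative and not the paper's exact procedure:

```python
import numpy as np

def shrink_and_perturb(weights, shrink=0.8, noise_std=0.01, rng=None):
    """Illustrative 'regenerative' intervention (shrink-and-perturb).

    Shrinks every weight array toward zero by a factor `shrink` and adds
    small Gaussian noise. The factor and noise scale here are assumed
    values for demonstration, not tuned settings from the paper.
    """
    rng = rng or np.random.default_rng()
    return [shrink * w + noise_std * rng.standard_normal(w.shape)
            for w in weights]

# Toy "network": a list of weight arrays.
layers = [np.ones((4, 4)), np.ones(4)]
new_layers = shrink_and_perturb(layers, rng=np.random.default_rng(0))
assert all(a.shape == b.shape for a, b in zip(layers, new_layers))
```

Applied periodically during training, such a reset partially reinitializes the network while retaining most of what it has learned, which is the intuition behind the regenerative class of methods the abstract refers to.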








Mixer

Neural Information Processing Systems

Grouping the channels together, token-mixing MLPs take S-dimensional vectors as inputs. Every such vector contains the values of a single feature across S different spatial locations. In other words, token-mixing MLPs operate by looking at only one channel at a time. For stochastic depth, following the original paper [3], we linearly increase the probability of dropping a layer from 0.0. Models are fine-tuned at resolution 224 unless mentioned otherwise. We follow the setup of [2].
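The token-mixing operation described above can be sketched as follows. This is a minimal NumPy illustration with assumed shapes and a ReLU in place of the GELU used in practice, not the official Mixer implementation: the input is transposed so the MLP acts along the token axis S, independently for each channel.

```python
import numpy as np

def token_mixing_mlp(x, w1, b1, w2, b2):
    """Sketch of a token-mixing MLP.

    x: (S, C) array of S spatial tokens with C channels.
    After transposing, each row holds one feature across the S
    locations, so the MLP mixes tokens while treating channels
    independently -- i.e. it looks at only one channel at a time.
    """
    y = x.T                              # (C, S): one channel per row
    y = np.maximum(y @ w1 + b1, 0.0)     # hidden layer (ReLU here; GELU in practice)
    y = y @ w2 + b2                      # project back to S per channel
    return y.T                           # (S, C)

S, C, D = 4, 3, 8                        # tokens, channels, hidden width (illustrative)
rng = np.random.default_rng(0)
x = rng.normal(size=(S, C))
w1, b1 = rng.normal(size=(S, D)), np.zeros(D)
w2, b2 = rng.normal(size=(D, S)), np.zeros(S)
out = token_mixing_mlp(x, w1, b1, w2, b2)
assert out.shape == (S, C)
```

Because the same weights (w1, w2) are shared across all channels, the parameter count depends on S and the hidden width D, not on the number of channels C.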