

A neurally plausible model learns successor representations in partially observable environments

Neural Information Processing Systems

Animals need to devise strategies to maximize returns while interacting with their environment based on incoming noisy sensory observations. Task-relevant states, such as the agent's location within an environment or the presence of a predator, are often not directly observable but must be inferred using available sensory information. Successor representations (SR) have been proposed as a middle-ground between model-based and model-free reinforcement learning strategies, allowing for fast value computation and rapid adaptation to changes in the reward function or goal locations. Indeed, recent studies suggest that features of neural responses are consistent with the SR framework. However, it is not clear how such representations might be learned and computed in partially observed, noisy environments. Here, we introduce a neurally plausible model using distributional successor features, which builds on the distributed distributional code for the representation and computation of uncertainty, and which allows for efficient value function computation in partially observed environments via the successor representation. We show that distributional successor features can support reinforcement learning in noisy environments in which direct learning of successful policies is infeasible.
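The fast value computation the abstract attributes to the SR framework can be made concrete in a simple, fully observed setting. The sketch below is a generic illustration of the SR idea, not the authors' distributional model: it uses an invented 5-state chain MDP and a fixed policy, learns a tabular SR by temporal-difference updates, and then reprices two different reward functions with a single matrix-vector product each, without re-learning the representation.

```python
import numpy as np

# Illustrative toy example (not the paper's model): tabular successor
# representation (SR) in a fully observed 5-state cyclic chain MDP.
# M[s, s'] estimates the expected discounted future occupancy of s'
# starting from s under a fixed policy; values then follow as V = M @ r,
# so a change in reward r needs only a matrix-vector product.

n_states, gamma, alpha = 5, 0.9, 0.1
M = np.eye(n_states)  # SR initialised to identity (immediate occupancy only)

def step(s):
    # Fixed deterministic policy: move right, wrapping at the end.
    return (s + 1) % n_states

s = 0
for _ in range(5000):
    s_next = step(s)
    # TD update for the SR: target is one-hot(s) + gamma * M[s_next]
    onehot = np.eye(n_states)[s]
    M[s] += alpha * (onehot + gamma * M[s_next] - M[s])
    s = s_next

# Rapid adaptation to a changed reward function / goal location:
r1 = np.array([0., 0., 0., 0., 1.])   # goal at the last state
r2 = np.array([1., 0., 0., 0., 0.])   # goal moved to the first state
V1, V2 = M @ r1, M @ r2               # both values computed from the same M
```

For this deterministic cyclic chain, the learned M converges to the analytic SR (I - gamma * P)^(-1), where P is the policy's transition matrix; the paper's contribution is to extend this kind of computation to partially observed, noisy settings where the state itself must be inferred.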


Reviews: A neurally plausible model learns successor representations in partially observable environments

Neural Information Processing Systems

I don't have any major technical criticisms of the paper. However, I didn't feel that the experimental results really highlighted the advantages of this approach. Specifically, the authors never compare against any other method for solving POMDPs. I think such a comparison is necessary to make a compelling case for this method. Is it more sample efficient, more computationally efficient, more flexible?


Reviews: A neurally plausible model learns successor representations in partially observable environments

Neural Information Processing Systems

This work proposes a neurally plausible approach to reinforcement learning in partially-observed MDPs based on distributional successor features. The approach allows for efficient value function computation as demonstrated empirically. The three expert reviewers were unanimous that this paper should be accepted, and I see no reason to contradict their opinions.



A neurally plausible model learns successor representations in partially observable environments

Vértes, Eszter, Sahani, Maneesh

Neural Information Processing Systems



A neurally plausible model learns successor representations in partially observable environments

Vertes, Eszter, Sahani, Maneesh

arXiv.org Machine Learning
