Attention-Gated Brain Propagation: How the brain can implement reward-based error backpropagation

Neural Information Processing Systems

Much recent work has focused on biologically plausible variants of supervised learning algorithms. However, there is no teacher in the motor cortex that instructs the motor neurons, and learning in the brain depends on reward and punishment. We demonstrate a biologically plausible reinforcement learning scheme for deep networks with an arbitrary number of layers. The network chooses an action by selecting a unit in the output layer, and it uses feedback connections to assign credit to the units in successively lower layers that are responsible for this action. After the choice, the network receives reinforcement; there is no teacher correcting the errors.
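The scheme described in the abstract can be sketched in code. The following is a minimal illustrative implementation, not the authors' method: it assumes sigmoid units, tied (transposed) feedback weights, an epsilon-greedy action choice, and binary reward, none of which are specified above.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RewardGatedNet:
    """Sketch of reward-driven, attention-gated credit assignment."""

    def __init__(self, sizes, lr=0.5):
        # Forward weights; feedback weights are taken as tied (transposed)
        # purely for simplicity in this sketch.
        self.W = [rng.normal(0.0, 0.5, (m, n))
                  for m, n in zip(sizes[:-1], sizes[1:])]
        self.lr = lr

    def forward(self, x):
        acts = [x]
        for W in self.W:
            acts.append(sigmoid(acts[-1] @ W))
        return acts

    def step(self, x, target_idx, epsilon=0.2):
        acts = self.forward(x)
        out = acts[-1]
        # Action selection: usually the most active output unit,
        # occasionally an exploratory random one.
        a = rng.integers(len(out)) if rng.random() < epsilon else int(np.argmax(out))
        reward = 1.0 if a == target_idx else 0.0
        # Only the selected output unit carries an error signal: the
        # reward prediction error, scaled by that unit's sigmoid derivative.
        delta = np.zeros_like(out)
        delta[a] = (reward - out[a]) * out[a] * (1.0 - out[a])
        # Feedback connections assign credit to successively lower layers,
        # so only units on the pathway to the chosen action are updated.
        for i in reversed(range(len(self.W))):
            pre = acts[i]
            grad = np.outer(pre, delta)
            if i > 0:
                delta = (delta @ self.W[i].T) * pre * (1.0 - pre)
            self.W[i] += self.lr * grad
        return reward
```

On a toy task (one-hot input mapped to the matching output unit), repeated calls to `step` raise the average reward well above chance, with no teacher ever supplying the correct output vector.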



We will take the smaller points, the clarity of the equations, and the use of minus signs and symbols into account when we revise the paper

Neural Information Processing Systems

We thank the reviewers for their constructive comments; here we focus on the main concerns. This is a neuroscientific finding, which has been reviewed in e.g. Feedback alignment fails on simple problems and is known not to work at all in deeper networks. AGREL dealt with a single hidden layer.

