Review for NeurIPS paper: Learning efficient task-dependent representations with synaptic plasticity


This paper proposes a stochastic recurrent neural network that builds its local information representation through a learning rule derived from Boltzmann machines but weighted by a task-dependent objective function, yielding a so-called tri-factor learning rule. The results show how the learned representations depend on the task (regression versus classification) in terms of the distribution of tuning curves, population-averaged activity, and dependence on the stimulus prior. The paper then examines how noise is redistributed along the neural manifold so that task performance is preserved.

Reviewers were overall positively predisposed towards this submission. Strengths include the coherent derivation of the proposed learning rule and the thorough analysis of its properties.
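For readers unfamiliar with the term, a tri-factor (three-factor) rule multiplies the usual Hebbian pre- and postsynaptic terms by a global task-dependent modulatory signal. The following is a minimal sketch of that idea, not the authors' actual rule: the network size, the Gibbs sampling procedure, and the error-based modulator `M` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8  # number of binary units (illustrative size)

def sample_states(W, b, steps=200):
    """Gibbs-sample a binary state vector from a Boltzmann machine (weights W, biases b)."""
    s = rng.integers(0, 2, size=n).astype(float)
    for _ in range(steps):
        for i in range(n):
            drive = W[i] @ s - W[i, i] * s[i] + b[i]
            p = 1.0 / (1.0 + np.exp(-drive))   # sigmoid of the local field
            s[i] = float(rng.random() < p)
    return s

# Symmetric weights with zero diagonal, as in a Boltzmann machine
W = rng.normal(0, 0.1, (n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)
b = np.zeros(n)

eta = 0.01
target = rng.integers(0, 2, size=n).astype(float)  # hypothetical task target

for _ in range(50):
    s = sample_states(W, b)
    # Third factor: a task-dependent modulatory signal (here, negative task error)
    M = -np.mean((s - target) ** 2)
    # Tri-factor update: presynaptic activity * postsynaptic activity * modulator
    dW = eta * M * np.outer(s, s)
    np.fill_diagonal(dW, 0)
    W += (dW + dW.T) / 2   # keep the weight matrix symmetric
```

The key point the sketch illustrates is that the Hebbian correlation term `np.outer(s, s)` is gated by a scalar `M` carrying task information, which is what distinguishes a tri-factor rule from plain Boltzmann-machine learning.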