Review for NeurIPS paper: Organizing recurrent network dynamics by task-computation to enable continual learning

Neural Information Processing Systems 

Summary and Contributions: This manuscript addresses the problem of continual learning in RNNs. The authors propose a new learning rule that organizes the dynamics for different tasks into orthogonal subspaces. Using a set of neuroscience tasks, they show how this learning rule avoids catastrophic interference between tasks. By analyzing the dynamics of trained networks, they provide evidence for why the learning rule is successful, which also allows them to discuss the problem of transfer learning.

Strengths: - The authors propose a new, original solution to the problem of continual learning, which also allows them to address and understand under which conditions learning in one task can be transferred to learning of another task.
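The orthogonal-subspace idea summarized above can be illustrated with a minimal sketch (this is an illustration of the general projection-based approach, not the authors' actual learning rule): after one task is learned, weight updates for subsequent tasks are projected onto the orthogonal complement of the activity subspace the first task uses, so the earlier task's dynamics are left undisturbed. All names and sizes below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: n recurrent units; columns of A are (hypothetical)
# activity directions occupied by task 1.
n = 8
A = rng.standard_normal((n, 3))

# Orthonormal basis U for the task-1 activity subspace (via QR).
U, _ = np.linalg.qr(A)

# Projector onto the orthogonal complement: P = I - U U^T.
P = np.eye(n) - U @ U.T

# A raw gradient for task 2 is projected before being applied,
# so the update has no component along task-1 directions.
g = rng.standard_normal(n)
g_proj = P @ g
```

Because `U.T @ g_proj` vanishes, updates of this form cannot interfere with the dynamics task 1 relies on, which is the mechanism the review credits for avoiding catastrophic interference.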