Review for NeurIPS paper: Meta-Learning through Hebbian Plasticity in Random Networks


Weaknesses: The fact that every connection's plasticity parameters can be learned makes it difficult to interpret what is being learned. Are the weights effectively learning to relax to the same steady state from random initial conditions (in which case the plasticity rules are essentially encoding a fixed set of weights)? The illustrated weight attractors do not provide much insight here. The results for average distance traveled are not particularly convincing, since the static-weight networks outperform the Hebbian networks so drastically in two of the three conditions. The requirement that networks start from random initial weights may also be limiting performance, and from a biological standpoint completely unstructured random weights are probably not the appropriate starting point, since evolution and development likely optimize this initial condition as well.
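
To make the steady-state question concrete, below is a minimal numpy sketch of the diagnostic I have in mind, assuming the generalized ABCD Hebbian update dw_ij = eta * (A_ij x_i y_j + B_ij x_i + C_ij y_j + D_ij). The coefficient values, input stream, and weight clipping here are illustrative stand-ins, not the authors' code. If the pairwise distances between final weights shrink toward zero across random initializations, the plasticity rules are effectively encoding a fixed weight matrix:

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_out = 8, 4

    # Hypothetical per-connection coefficients for the generalized Hebbian
    # ("ABCD") rule; in the paper these are evolved, here they are random.
    A, B, C, D = (rng.standard_normal((n_in, n_out)) for _ in range(4))
    eta = 0.01
    inputs = rng.standard_normal((50, n_in))   # fixed, shared input stream

    def relax(w, steps=2000):
        """Run the Hebbian update on w and return the final weights."""
        for t in range(steps):
            x = inputs[t % len(inputs)]        # presynaptic activity
            y = np.tanh(x @ w)                 # postsynaptic activity
            # dw_ij = eta * (A_ij x_i y_j + B_ij x_i + C_ij y_j + D_ij)
            w = w + eta * (A * np.outer(x, y)
                           + B * x[:, None] + C * y[None, :] + D)
            # clip so the constant D term cannot drift unboundedly
            # (an assumption of this sketch)
            w = np.clip(w, -1.0, 1.0)
        return w

    # Do different random initializations relax to the same attractor?
    finals = [relax(rng.standard_normal((n_in, n_out)) * 0.1)
              for _ in range(5)]
    dists = [np.linalg.norm(finals[i] - finals[j])
             for i in range(5) for j in range(i + 1, 5)]
    print("pairwise distances between final weights:", np.round(dists, 3))

Reporting a convergence measure of this kind, rather than only visualized attractors, would make the interpretability claim directly testable.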