

Stable and expressive recurrent vision models

Neural Information Processing Systems

Primate vision depends on recurrent processing for reliable perception. A growing body of literature also suggests that recurrent connections improve the learning efficiency and generalization of vision models on classic computer vision challenges. Why, then, are current large-scale challenges dominated by feedforward networks? We posit that the effectiveness of recurrent vision models is bottlenecked by the standard algorithm used for training them, back-propagation through time (BPTT), which has O(N) memory complexity for training an N-step model. Thus, recurrent vision model design is bounded by memory constraints, forcing a choice between rivaling the enormous capacity of leading feedforward models or trying to compensate for this deficit through granular and complex dynamics.
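The O(N) memory complexity of BPTT comes from caching one set of activations per recurrent step for the backward pass. A toy illustration of this (not the paper's code; the scalar RNN and the trivial loss here are made up for exposition) is a hand-rolled BPTT over `h_{t+1} = tanh(w*h_t + x)`, where the activation tape grows linearly with the number of steps:

```python
import math

def bptt_scalar_rnn(x, w, n_steps):
    """Unroll h_{t+1} = tanh(w*h_t + x) for n_steps, then backprop
    a trivial loss L = h_N. Returns (final_state, dL/dw, tape_length).
    The tape of cached states is what gives BPTT its O(N) memory cost."""
    h = 0.0
    tape = []                          # one cached state per step -> O(N) memory
    for _ in range(n_steps):
        tape.append(h)
        h = math.tanh(w * h + x)

    # Reverse pass through the unrolled graph.
    grad_h, grad_w = 1.0, 0.0
    for h_prev in reversed(tape):
        pre = w * h_prev + x
        d_tanh = 1.0 - math.tanh(pre) ** 2
        grad_w += grad_h * d_tanh * h_prev   # accumulate dL/dw at this step
        grad_h = grad_h * d_tanh * w         # propagate dL/dh to earlier step
    return h, grad_w, len(tape)

h, grad_w, tape_len = bptt_scalar_rnn(x=0.5, w=0.9, n_steps=20)
print(tape_len)  # cached states grow with step count: 20 here
```

Doubling `n_steps` doubles `tape_len`, which is exactly the memory bound the abstract argues constrains recurrent vision model design.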


Review for NeurIPS paper: Stable and expressive recurrent vision models

Neural Information Processing Systems

Only on my second reading of this paper did I begin to understand what it was about. The authors have done so much work that it is very hard to explain everything in a mere eight pages. I think quite a few details could be moved to the supplementary materials, with more emphasis placed on key findings. For instance, describing results in figures rather than text, and using the gargantuan abbreviations only when absolutely needed, would already be a great improvement (e.g., Line 145; Lines 165-170; Lines 220-232; Lines 280-292). Specifically, I think I would find the following structure clearer: a single figure with a FF CNN, (C-)BPTT / (C-RBP) hGRU / convLSTM, so that we can clearly see how each addition helps.


Review for NeurIPS paper: Stable and expressive recurrent vision models

Neural Information Processing Systems

This paper addresses the limitations of BPTT by proposing a new method (C-RBP) with O(1) memory complexity. The proposed approach is evaluated on Pathfinder, showing reasonably good results. Reviewers were unanimously positive, though there were some minor concerns about clarity and the baselines used. I found the authors' response to both points compelling, as did the reviewers, though I would strongly encourage the authors to take the clarity suggestions seriously, as I feel they will significantly improve the paper. I recommend that this paper be accepted as a spotlight.
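The O(1) memory claim rests on the recurrent back-propagation idea: if the dynamics converge to a fixed point, the gradient can be obtained from the implicit function theorem at that point, with no per-step activation tape. A hedged scalar sketch of this principle (not the authors' C-RBP implementation, which adds a Lipschitz/contraction constraint; the toy dynamics below are assumed for illustration) looks like this:

```python
import math

def rbp_scalar(x, w, n_steps=100):
    """Iterate h <- tanh(w*h + x) to (approximate) convergence, then
    differentiate through the fixed point h* = tanh(w*h* + x).
    Nothing is cached during the forward loop, so memory is O(1)
    no matter how many iterations run."""
    h = 0.0
    for _ in range(n_steps):           # forward iterations, no tape
        h = math.tanh(w * h + x)

    # Implicit differentiation at the fixed point:
    #   h* = tanh(w*h* + x)  =>  dh*/dw = s*h* / (1 - s*w),
    # where s = tanh'(w*h* + x) = 1 - h*^2.
    s = 1.0 - h ** 2
    grad_w = (s * h) / (1.0 - s * w)   # valid when |s*w| < 1 (stable dynamics)
    return h, grad_w

h_star, grad_w = rbp_scalar(x=0.5, w=0.9)
```

The stability caveat in the comment is the crux: the implicit gradient is only well-defined when the dynamics contract, which is why stability constraints matter for making this training scheme practical.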
