Better artificial intelligence does not mean better models of biology

Drew Linsley, Pinyuan Feng, Thomas Serre

arXiv.org Artificial Intelligence 

Vision science has always developed models at smaller scales than the frontier of artificial intelligence. This is partially because of the academic roots of vision science, partially because of a well-founded desire to lean on reductionism to truly understand how vision works, and partially because attempts at incorporating biological inspiration into DNNs have been hamstrung by implementations that are poorly suited for GPUs. For example, most attempts at biologically-inspired DNNs have focused on inducing architectural constraints like recurrence [34,61,116-118] and different forms of feedback [59,60] that are not explicitly included in DNNs but are known to play key roles in primate vision [119-123]. While we believe these approaches are important for neuroscience, and especially for constraining model hypothesis spaces in small-data settings, the methods used to implement them are undeniably challenging to scale [118,124], and it is possible that the induced computational strategies could be learned by a less-constrained DNN trained with the "right" data and objective [125]. Thus, a more effective approach to reverse-engineering vision than hand-designing small-scale recurrent DNNs may be to train DNNs at large scales with approximations of the kinds of data and routines that shape biological visual systems.
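To make the scaling difficulty concrete, the sketch below shows why recurrence resists GPU-friendly parallelization: each refinement step depends on the hidden state produced by the previous one, so the timestep loop must run sequentially. This is a minimal illustrative toy (scalar weights, NumPy, a made-up `recurrent_refinement` function), not an implementation from the paper or from any of the cited recurrent DNN architectures.

```python
import numpy as np

def recurrent_refinement(x, w_in, w_rec, steps=4):
    """Iteratively refine a feature map with a recurrent update.

    Each step reads the hidden state written by the previous step,
    so the loop over `steps` cannot be parallelized across time --
    the serial dependency that makes recurrent, biologically-inspired
    DNNs hard to scale on GPUs. (Toy sketch with scalar weights.)
    """
    h = np.zeros_like(x)
    for _ in range(steps):
        # h_t depends on h_{t-1}: a strictly sequential chain.
        h = np.tanh(w_in * x + w_rec * h)
    return h

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))        # toy "feature map"
h = recurrent_refinement(x, w_in=0.5, w_rec=0.3)
print(h.shape)                          # same spatial shape as the input
```

A feedforward DNN, by contrast, evaluates each layer exactly once, so depth can be pipelined and batched freely; the recurrent loop above trades that hardware efficiency for iterative, feedback-like computation.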