Reviews: Learning from brains how to regularize machines
CNNs, like visual cortex, build a representation of the visual world that is useful to the "viewer". We have known for a while that CNNs trained on object recognition tasks capture some (but not all) aspects of the representation computed by primate visual cortex. Here the authors propose to bridge the gap by explicitly encouraging a CNN to build a representation that is "similar" to the one computed by the visual cortex of mice. This is a neat idea and certainly a novel one. The paper is clearly written, which I appreciated.
Learning from brains how to regularize machines
Zhe Li, Wieland Brendel, Edgar Walker, Erick Cobos, Taliah Muhammad, Jacob Reimer, Matthias Bethge, Fabian Sinz, Zachary Pitkow, Andreas Tolias
Despite impressive performance on numerous visual tasks, Convolutional Neural Networks (CNNs) --- unlike brains --- are often highly sensitive to small perturbations of their input, e.g. adversarial noise leading to erroneous decisions. We propose to regularize CNNs using large-scale neuroscience data, encouraging them to learn more robust features by matching neural representational similarity. We presented natural images to mice and measured the responses of thousands of neurons from cortical visual areas. Next, we denoised the notoriously variable neural activity using strong predictive models trained on this large corpus of responses from the mouse visual system, and computed the representational similarity for millions of image pairs from the model's predictions. We then used this neural representational similarity to regularize CNNs trained on image classification, penalizing intermediate representations whose similarity structure deviated from the neural one.
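The regularizer described above can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' implementation: it assumes cosine similarity over image pairs and a squared-error penalty between the CNN's similarity matrix and the neural one; all names are illustrative.

```python
import numpy as np

def cosine_similarity_matrix(X):
    """Pairwise cosine similarity between rows (images) of X."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    return Xn @ Xn.T

def similarity_loss(neural_responses, cnn_features):
    """Penalty for a CNN whose representational similarity deviates from neural data.

    Both inputs are (n_images, n_units) response matrices; the loss compares
    their pairwise cosine-similarity matrices with a squared-error penalty.
    Illustrative sketch only -- the paper's actual loss may differ in detail.
    """
    S_neural = cosine_similarity_matrix(neural_responses)
    S_cnn = cosine_similarity_matrix(cnn_features)
    # Compare off-diagonal entries only: the diagonal is 1 by construction.
    mask = ~np.eye(len(S_neural), dtype=bool)
    return np.mean((S_cnn[mask] - S_neural[mask]) ** 2)
```

In training, a term like `lambda_reg * similarity_loss(...)` would be added to the classification loss, evaluated on an intermediate CNN layer's responses to the same images shown to the mice.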