MindSet: Vision. A toolbox for testing DNNs on key psychological experiments

Valerio Biscione, Dong Yin, Gaurav Malhotra, Marin Dujmovic, Milton L. Montero, Guillermo Puebla, Federico Adolfi, Rachel F. Heaton, John E. Hummel, Benjamin D. Evans, Karim Habashy, Jeffrey S. Bowers

arXiv.org Artificial Intelligence 

Multiple benchmarks have been developed to assess the alignment between deep neural networks (DNNs) and human vision. In almost all cases these benchmarks are observational, in the sense that they are composed of behavioural and brain responses to naturalistic images that have not been manipulated to test hypotheses about how DNNs or humans perceive and identify objects. Here we introduce the toolbox MindSet: Vision, a collection of image datasets and related scripts designed to test DNNs on 30 psychological findings. In all experimental conditions, the stimuli are systematically manipulated to test specific hypotheses regarding human visual perception and object recognition. In addition to providing pre-generated image datasets, we provide code to regenerate these datasets, with many configurable parameters that greatly extend their versatility across research contexts, as well as code to facilitate testing DNNs on these datasets using three different methods (similarity judgments, out-of-distribution classification, and the decoder method), accessible at https://github.
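To illustrate the general idea behind one of the three testing methods, the sketch below shows how a similarity-judgment analysis might compare a DNN's internal representations of systematically manipulated stimuli. This is a hedged, minimal sketch, not the MindSet: Vision API; the feature vectors and manipulation names (rotated, scrambled) are hypothetical placeholders for activations a feature extractor would produce.

```python
# Hedged sketch (NOT the MindSet: Vision API): the similarity-judgment
# idea, where a DNN's internal representations of a base image and its
# manipulated versions are compared via cosine similarity.
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical feature vectors (stand-ins for DNN activations) for a
# base image and two manipulated versions of it.
base      = [0.9, 0.1, 0.4, 0.7]
rotated   = [0.8, 0.2, 0.5, 0.6]  # hypothetical shape-preserving change
scrambled = [0.1, 0.9, 0.7, 0.2]  # hypothetical shape-destroying change

sim_rotated   = cosine_similarity(base, rotated)
sim_scrambled = cosine_similarity(base, scrambled)

# A network whose representations track shape (as human vision does)
# should keep the shape-preserving manipulation closer to the base.
print(sim_rotated > sim_scrambled)  # → True for these example vectors
```

In an actual experiment, the feature vectors would come from a chosen layer of a pretrained network, and the pattern of similarities across manipulation conditions would be compared against the corresponding human psychological finding.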
