Even the smartest AI models don't match human visual processing: How deep-network models take potentially dangerous 'shortcuts' in solving complex recognition tasks


Published in the Cell Press journal iScience, "Deep learning models fail to capture the configural nature of human shape perception" is a collaborative study by James Elder, who holds the York Research Chair in Human and Computer Vision and is Co-Director of York's Centre for AI & Society, and Nicholas Baker, an assistant professor of psychology at Loyola University Chicago and a former VISTA postdoctoral fellow at York.

The study employed novel visual stimuli called "Frankensteins" to explore how the human brain and deep convolutional neural networks (DCNNs) process holistic, configural object properties. "Frankensteins are simply objects that have been taken apart and put back together the wrong way around," says Elder. "As a result, they have all the right local features, but in the wrong places."

The investigators found that while the human visual system is confused by Frankensteins, DCNNs are not -- revealing an insensitivity to configural object properties. "Our results explain why deep AI models fail under certain conditions and point to the need to consider tasks beyond object recognition in order to understand visual processing in the brain," Elder says.
