How to trick a neural network into thinking a panda is a vulture


When I go to Google Photos and search my photos for 'skyline', it finds me this picture of the New York skyline I took in August, without me having labelled it! When I search for 'cathedral', Google's neural networks find me pictures of cathedrals & churches I've seen. But of course, neural networks aren't magic: nothing is! I recently read a paper, "Explaining and Harnessing Adversarial Examples", that helped demystify neural networks a little for me. The paper explains how to force a neural network to make really egregious mistakes. It does this by exploiting the fact that the network is simpler (more linear!) than you might expect. It's important to understand that this doesn't explain all (or even most) of the kinds of mistakes neural networks make. There are a lot of possible mistakes!
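To get an intuition for why "more linear than you might expect" matters, here's a toy sketch (the weights and dimensions are made up, not from the paper): for a purely linear classifier, nudging every input dimension by a tiny amount in the direction of the sign of the weights adds up to a huge change in the output when the input is high-dimensional, like an image with thousands of pixels.

```python
import numpy as np

# Toy linear "classifier": score(x) = w . x, and sign(score) is the class.
# The weights here are random, purely for illustration.
rng = np.random.default_rng(0)
dim = 1000                            # high-dimensional input, like pixels
w = rng.choice([-1.0, 1.0], size=dim) # made-up weight vector
x = rng.normal(size=dim)              # a "clean" input

def score(x):
    return w @ x

# Perturb every dimension by only 0.1, in the direction that increases
# the score the fastest (the sign of the gradient, which here is sign(w)).
eps = 0.1
x_adv = x + eps * np.sign(w)

# No single pixel moved by more than 0.1, but the total score shifts by
# roughly eps * dim -- enough to push the input across a decision boundary.
print(score(x_adv) - score(x))
```

The point of the sketch: the per-pixel change is imperceptibly small, but because the effects accumulate linearly across all 1000 dimensions, the classifier's output changes drastically. The paper's argument is that real neural networks behave linearly enough for the same trick to work on them.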