Hacking the Brain With Adversarial Images
The difference between the two pictures is that the one on the right has been tweaked a bit by an algorithm to make it hard for a type of computer model called a convolutional neural network (CNN) to tell what it really is. In this case, the CNN thinks it's looking at a dog rather than a cat, but what's remarkable is that most people think the same thing. This is an example of what's called an adversarial image: an image specifically designed to fool neural networks into making an incorrect determination about what they're looking at. Visual classification algorithms powered by CNNs are commonly used to recognize objects in images, so researchers at Google Brain set out to determine whether the same techniques that fool these artificial neural networks can also fool the biological neural networks inside our heads, by developing adversarial images capable of making both computers and humans think they're looking at something they aren't.
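The article doesn't describe the algorithm Google Brain used to construct these images, but a standard way to generate adversarial images is the fast gradient sign method (FGSM): compute the gradient of the classifier's loss with respect to the input pixels, then nudge each pixel slightly in the direction that increases the loss. A minimal PyTorch sketch, assuming a pretrained ImageNet classifier (the model choice and epsilon value are illustrative, not from the article):

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Illustrative model choice; the article does not say which
# classifier Google Brain attacked.
model = models.resnet50(pretrained=True).eval()

def fgsm_attack(image, true_label, epsilon=0.01):
    """Fast gradient sign method: perturb each pixel by +/- epsilon
    in the direction that increases the classification loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step in the sign of the input gradient, then clamp to keep
    # pixel values in a valid range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Example: a batch of one 224x224 RGB image labeled "tabby cat"
# (ImageNet class 281).
x = torch.rand(1, 3, 224, 224)
adv = fgsm_attack(x, torch.tensor([281]))
```

Perturbations this small are usually imperceptible to people, which is why the result described above is striking: the Google Brain images change what humans report seeing as well.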
IEEE Spectrum Robotics Channel
Feb-28-2018, 16:06:22 GMT
- AI-Alerts:
- 2018 > 2018-03 > AAAI AI-Alert for Mar 8, 2018 (1.00)
- Industry:
- Information Technology (0.69)
- Technology: