Hacked Dog Pics Can Play Tricks on Computer Vision AI

IEEE Spectrum Robotics 

Tricking Google's computer vision AI into seeing a dog as a pair of human skiers may seem mostly harmless. But the possibilities become more unnerving when you consider how hackers could trick a self-driving car's AI into seeing a plastic bag instead of a child up ahead, or make a future surveillance system overlook a gun because it sees a toy doll instead.

An independent AI research group run by MIT students has demonstrated a new way to fool the computer vision algorithms that enable AI systems to see the world, an approach that could prove up to 1,000 times as fast as existing methods for hacking "black box" systems whose inner workings remain hidden to outsiders. That notion of a black box aptly describes the neural networks behind the deep learning algorithms powering computer vision services at Google, Facebook, and other companies.
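To make the black-box setting concrete, here is a minimal sketch of a query-based attack of the general kind described above. It assumes the attacker can only submit inputs and read back confidence scores, so it estimates a gradient by random probing and nudges the input to drive down the score for the model's original prediction. The toy `query_score` classifier, the sampling scheme, and all parameters are illustrative assumptions, not the research group's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a black-box classifier: the attacker can
# query scores but cannot inspect the weights W.
W = rng.normal(size=(10, 64))

def query_score(x, label):
    """Return the model's softmax confidence for `label` on input x."""
    logits = W @ x
    e = np.exp(logits - logits.max())
    return (e / e.sum())[label]

def estimated_gradient(x, label, sigma=0.1, n_samples=50):
    """Estimate the gradient of the score using only queries:
    probe the black box at randomly perturbed copies of x."""
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.normal(size=x.shape)
        grad += (query_score(x + sigma * u, label)
                 - query_score(x - sigma * u, label)) * u
    return grad / (2 * sigma * n_samples)

# Attack loop: take small signed steps that lower the confidence
# in the model's original prediction, without any internal access.
x = rng.normal(size=64)
label = int(np.argmax(W @ x))          # the model's current prediction
start = query_score(x, label)
for _ in range(30):
    x -= 0.05 * np.sign(estimated_gradient(x, label))
print(start, query_score(x, label))
```

Because every gradient estimate costs many queries, the efficiency of the probing strategy is exactly what distinguishes faster black-box attacks from slower ones.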