Fooling Google's image-recognition AI 1000x faster


By attacking even black-box systems whose inner workings are hidden, MIT CSAIL students have shown that hackers can break some of the most advanced image-recognition AIs — the kind of systems that may someday scan for suspicious objects in TSA security lines or steer self-driving cars. Yet these neural networks can easily be fooled into thinking that, say, a photo of a turtle is actually a gun. That could have major consequences: imagine if, simply by changing a few pixels, a bitter ex could slip private photos past Facebook's detection systems, or a terrorist could disguise a bomb to evade screening. According to the team from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), such hacks are even easier to pull off than previously thought.
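The general idea behind this kind of black-box attack can be sketched in a few lines: the attacker never sees the model's weights or gradients, only its output scores, and estimates a gradient from repeated queries before nudging pixels within a tiny budget. Everything below — the toy classifier, the function names, and the parameters — is an illustrative assumption, not CSAIL's actual method or Google's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a black-box classifier: hidden weights the
# attacker never sees; only the output probability is observable.
W = rng.normal(size=64) / 8.0

def blackbox_prob(x):
    """Score-only access: probability that 'image' x is the true class."""
    return 1.0 / (1.0 + np.exp(-W @ x))

def estimated_gradient(x, n_queries=50, sigma=1e-3):
    """Approximate the score's gradient purely from paired queries
    (finite differences along random directions)."""
    g = np.zeros_like(x)
    for _ in range(n_queries):
        u = rng.normal(size=x.shape)
        g += u * (blackbox_prob(x + sigma * u) - blackbox_prob(x - sigma * u))
    return g / (2 * sigma * n_queries)

def attack(x, steps=30, eps=0.05, lr=0.01):
    """Lower the true-class score using only tiny, bounded pixel changes."""
    x_adv = x.copy()
    for _ in range(steps):
        g = estimated_gradient(x_adv)
        x_adv -= lr * np.sign(g)                  # step against the score
        x_adv = np.clip(x_adv, x - eps, x + eps)  # keep changes imperceptible
    return x_adv

x = rng.normal(size=64)   # a toy "image"
x_adv = attack(x)
```

Only `blackbox_prob` is ever called, so the sketch mirrors the threat model in the article: no access to the network's internals, just its answers — and the perturbation stays within a few hundredths per pixel.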