Algorithm helps artificial intelligence systems dodge "adversarial" inputs


In a perfect world, what you see is what you get. If this were the case, the job of artificial intelligence systems would be refreshingly straightforward. Take collision avoidance systems in self-driving cars. If visual input to on-board cameras could be trusted entirely, an AI system could directly map that input to an appropriate action -- steer right, steer left, or continue straight -- to avoid hitting a pedestrian that its cameras see in the road. But what if there's a glitch in the cameras that slightly shifts an image by a few pixels?
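The pixel-shift scenario can be made concrete with a small sketch. The snippet below (an illustration, not anything from the research described here) simulates a camera glitch by shifting a random stand-in "frame" two pixels sideways: the scene is essentially unchanged to a human, yet the raw input vector a model consumes moves substantially.

```python
import numpy as np

# Hypothetical illustration: a stand-in for a 32x32 camera frame.
rng = np.random.default_rng(0)
image = rng.random((32, 32))

# Simulate a camera glitch: shift the whole frame right by 2 pixels.
shifted = np.roll(image, shift=2, axis=1)

# To a human the scene looks the same; to a model mapping raw pixels
# directly to actions, the input has moved far in input space.
per_pixel_change = np.abs(image - shifted).mean()
print(f"mean absolute per-pixel change: {per_pixel_change:.3f}")
```

A model trained to map pixels directly to steering actions has no built-in guarantee that two such nearly identical scenes yield the same action, which is exactly the gap adversarial inputs exploit.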