In a perfect world, what you see is what you get. If that were the case, the job of artificial intelligence systems would be refreshingly straightforward. Take collision avoidance systems in self-driving cars. If the visual input to on-board cameras could be trusted entirely, an AI system could directly map that input to an appropriate action (steer right, steer left, or continue straight) to avoid hitting a pedestrian its cameras see in the road. But what if a glitch in the cameras slightly shifts an image by a few pixels? If the car blindly trusted such "adversarial inputs," it might take unnecessary and potentially dangerous action.
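A minimal sketch can make the fragility concrete. The toy classifier below (all numbers are hypothetical, chosen only for illustration; this is not any real perception system) sits close to its decision boundary, so a tiny shift in the input values flips its output:

```python
def classify(pixels, weights, bias):
    """Toy linear classifier: 'pedestrian' if the weighted sum is positive."""
    score = sum(p * w for p, w in zip(pixels, weights)) + bias
    return "pedestrian" if score > 0 else "clear"

# Hypothetical model parameters and inputs, chosen so the clean input
# sits just above the decision threshold.
weights = [0.5, -0.5, 0.5, -0.5]
bias = -0.01

clean = [0.6, 0.5, 0.6, 0.5]  # genuine pedestrian scene (score = 0.09)
# The same scene with a tiny perturbation added to each value:
shift = [-0.05, 0.05, -0.05, 0.05]
perturbed = [p + e for p, e in zip(clean, shift)]  # score = -0.01

print(classify(clean, weights, bias))      # 'pedestrian'
print(classify(perturbed, weights, bias))  # 'clear' -- decision flipped
```

The perturbation is small relative to each input, yet it pushes the score across the threshold, which is the essence of why raw sensor input cannot be trusted blindly.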
Mar-9-2021, 03:50:31 GMT