Algorithm helps artificial intelligence systems dodge 'adversarial' inputs

#artificialintelligence

In a perfect world, what you see is what you get. If this were the case, the job of artificial intelligence systems would be refreshingly straightforward. Take collision avoidance systems in self-driving cars. If visual input to on-board cameras could be trusted entirely, an AI system could directly map that input to an appropriate action -- steer right, steer left, or continue straight -- to avoid hitting a pedestrian that its cameras see in the road. But what if there's a glitch in the cameras that slightly shifts an image by a few pixels? If the car blindly trusted so-called 'adversarial inputs,' it might take unnecessary and potentially dangerous action.


MIT Researchers Develop AI System To Cope With Imperfect Inputs

#artificialintelligence

Researchers from MIT have developed a new AI approach that could soon find its way into self-driving cars and industrial robots in smart factories. Designed to handle unpredictable interactions safely, the deep-learning algorithm promises to enhance the robustness of AI systems in safety-critical scenarios. From avoiding a pedestrian dashing across the road in unusually bad weather to overcoming the malicious obstruction of sensors in a manufacturing plant, the new approach enables AI systems to respond robustly even when critical inputs deviate because of unreliable sensors or noise. The details are outlined in a study by Michael Everett, Björn Lütjens, and Jonathan How of MIT. Titled "Certifiable Robustness to Adversarial State Uncertainty in Deep Reinforcement Learning", the study was published last month in the IEEE Transactions on Neural Networks and Learning Systems. The algorithm works by building a healthy "skepticism" of the measurements and inputs an AI system receives, helping machines navigate our real, imperfect world.
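
The article describes this "skepticism" only at a high level. As a rough illustration of the idea, the sketch below (Python; the names q_values and robust_action, the epsilon bound, and the sampling scheme are illustrative assumptions, not the authors' implementation) scores each action under many slightly perturbed copies of the observation and acts on the one whose worst-case score is highest.

    import numpy as np

    def robust_action(q_values, obs, epsilon=0.05, n_samples=64, rng=None):
        # q_values: callable mapping a 1-D observation to an array of per-action
        #           values (a stand-in for a trained Q-network).
        # obs:      the nominal, possibly noisy or corrupted, observation.
        # epsilon:  assumed per-dimension bound on how far the true state may
        #           lie from the measurement.
        rng = np.random.default_rng() if rng is None else rng
        # Sample candidate observations inside the epsilon-ball around the measurement.
        noise = rng.uniform(-epsilon, epsilon, size=(n_samples, obs.shape[0]))
        candidates = obs[None, :] + noise
        # Score every action under every candidate observation.
        values = np.stack([q_values(c) for c in candidates])  # (n_samples, n_actions)
        worst_case = values.min(axis=0)                        # worst value per action
        # Act on the action that still looks best in its worst case.
        return int(np.argmax(worst_case))

    # Toy usage: a hand-written "Q-network" over a 2-D observation and 3 actions.
    if __name__ == "__main__":
        def toy_q(o):
            return np.array([o[0] - o[1], 0.2, -abs(o[0])])
        print(robust_action(toy_q, np.array([0.1, 0.05])))

Sampling only spot-checks the perturbation set and can miss the true worst case; the MIT work instead computes certified bounds, as described in the paper abstract below.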


Certified Adversarial Robustness for Deep Reinforcement Learning

arXiv.org Machine Learning

Deep Neural Network-based systems are now the state-of-the-art in many robotics tasks, but their application in safety-critical domains remains dangerous without formal guarantees on network robustness. Small perturbations to sensor inputs (from noise or adversarial examples) are often enough to change network-based decisions, which was recently shown to cause an autonomous vehicle to swerve into another lane. In light of these dangers, numerous algorithms have been developed as defensive mechanisms against these adversarial inputs, some of which provide formal robustness guarantees or certificates. This work leverages research on certified adversarial robustness to develop an online certified defense for deep reinforcement learning algorithms. The proposed defense computes guaranteed lower bounds on state-action values during execution to identify and choose a robust action under a worst-case deviation in input space due to possible adversaries or noise. The approach is demonstrated on a Deep Q-Network policy and is shown to increase robustness to noise and adversaries in pedestrian collision avoidance scenarios and a classic control task. This work extends our previous paper with new performance guarantees, expanded results aggregated across more scenarios, an extension into scenarios with adversarial behavior, comparisons with a more computationally expensive method, and visualizations that provide intuition about the robustness algorithm.
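
The abstract pins down the decision rule: compute guaranteed lower bounds on each action's value under a bounded deviation of the input, then act on the action whose lower bound is highest. A minimal sketch of that rule follows, using simple interval bound propagation through a small fully connected Q-network as a stand-in for the tighter certified bounds used in the paper; the function names, network layout, and epsilon parameter are assumptions for illustration.

    import numpy as np

    def interval_bounds(layers, lower, upper):
        # layers: list of (W, b) pairs for a fully connected Q-network with ReLU
        #         activations on hidden layers and a linear output layer.
        # lower, upper: elementwise bounds on the true state, e.g. obs +/- epsilon.
        for i, (W, b) in enumerate(layers):
            W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
            new_lower = W_pos @ lower + W_neg @ upper + b
            new_upper = W_pos @ upper + W_neg @ lower + b
            if i < len(layers) - 1:  # ReLU is monotone, so it maps bounds to bounds
                new_lower = np.maximum(new_lower, 0.0)
                new_upper = np.maximum(new_upper, 0.0)
            lower, upper = new_lower, new_upper
        return lower, upper  # guaranteed bounds on every action's Q-value

    def certified_robust_action(layers, obs, epsilon):
        # Choose the action whose guaranteed lower bound on Q is largest,
        # i.e. the action that is provably acceptable under the worst-case
        # deviation of the input within the epsilon-ball.
        q_lower, _ = interval_bounds(layers, obs - epsilon, obs + epsilon)
        return int(np.argmax(q_lower))

    # Toy usage with a tiny random network: 2 inputs, 4 hidden units, 3 actions.
    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        layers = [(rng.normal(size=(4, 2)), np.zeros(4)),
                  (rng.normal(size=(3, 4)), np.zeros(3))]
        print(certified_robust_action(layers, np.array([0.1, -0.2]), epsilon=0.05))

Interval arithmetic generally gives looser bounds than the certification methods this line of work builds on, but the action-selection rule, taking the argmax of the certified lower bounds, matches the one described in the abstract.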