Adversarial vulnerability for any classifier
Despite achieving impressive performance, state-of-the-art classifiers remain highly vulnerable to small, imperceptible, adversarial perturbations. This vulnerability has empirically proven difficult to address. In this paper, we study the phenomenon of adversarial perturbations under the assumption that the data is generated by a smooth generative model. We derive fundamental upper bounds on the robustness of any classification function to such perturbations, and prove the existence of adversarial perturbations that transfer well across different classifiers with small risk. Our robustness analysis also provides insights into key properties of generative models, such as their smoothness and the dimensionality of their latent space. We conclude with numerical experiments showing that our bounds provide informative baselines for the maximal achievable robustness on several datasets.
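As an illustration of the phenomenon the abstract describes, the following is a minimal, hypothetical sketch (not the paper's method) of how a tiny L-infinity perturbation can flip a classifier's decision. It uses a toy linear classifier and an FGSM-style step; the weights, input, and budget `eps` are all illustrative assumptions, stand-ins for a trained network and a real image.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier: predict sign(w . x). The unit-norm weight vector w
# is an illustrative stand-in for a trained model.
w = rng.normal(size=500)
w /= np.linalg.norm(w)

# An input correctly classified as +1, but with a small margin.
x = 0.01 * w + rng.normal(scale=0.001, size=500)

# FGSM-style perturbation: step against the class score along
# sign(gradient) = sign(w), with a tiny per-coordinate budget eps.
eps = 0.002
x_adv = x - eps * np.sign(w)

print(np.sign(w @ x), np.sign(w @ x_adv))
```

Even though each coordinate moves by at most 0.002, the perturbation is aligned with the decision boundary's normal, so its effect on the score accumulates across all 500 dimensions and the predicted label flips.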
With Friends Like These, Who Needs Adversaries?
The vulnerability of deep image classification networks to adversarial attack is now well known, but less well understood. Via a novel experimental analysis, we illustrate some facts about deep convolutional networks for image classification that shed new light on their behaviour and how it connects to the problem of adversaries. In short, the celebrated performance of these networks and their vulnerability to adversarial attack are two sides of the same coin: the input image-space directions along which the networks are most vulnerable to attack are the same directions they use to achieve their classification performance in the first place. We develop this result in two main steps. The first uncovers the fact that classes tend to be associated with specific image-space directions. This is shown by examining the class-score outputs of the networks as functions of 1D movements along these directions, and it provides a novel perspective on the existence of universal adversarial perturbations. The second is a clear demonstration of the tight coupling between classification performance and vulnerability to adversarial attack within the spaces spanned by these directions.
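The probing procedure the abstract outlines (class scores as functions of 1D movements along a direction) can be sketched as follows. This is a hedged toy version, not the paper's experiment: the "network" is a linear map plus softmax, and the "class direction" is simply a row of the weight matrix, both assumptions made for the sake of a self-contained example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a trained classifier: class scores are a linear map
# followed by a softmax. The rows of W play the role of learned class
# directions in input space (a simplifying assumption).
n_classes, dim = 3, 100
W = rng.normal(size=(n_classes, dim))

def class_scores(x):
    z = W @ x
    e = np.exp(z - z.max())   # numerically stable softmax
    return e / e.sum()

# Probe the scores along the 1D line x + t*d, where d is the (hypothetical)
# direction associated with class 0.
x = rng.normal(size=dim)
d = W[0] / np.linalg.norm(W[0])

for t in np.linspace(0.0, 3.0, 4):
    probs = class_scores(x + t * d)
    print(f"t={t:.1f}  p(class 0)={probs[0]:.3f}")
```

Moving along the class-0 direction monotonically inflates the class-0 logit relative to the others, so the softmax probability of class 0 rises along the line, mirroring the kind of 1D score profile the abstract describes.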
Reinforcement learning applied to autonomous vehicles: an interview with Oliver Chang
In this interview series, we're meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. We caught up with Oliver Chang whose research interests span deep reinforcement learning, autonomous vehicles, and explainable AI. We found out more about some of the projects he's worked on so far, what drew him to the field, and what future AI directions he's excited about. Could you give us a quick introduction to who you are, where you're studying, and the topic of your research? I'm specializing in reinforcement learning applied to autonomous vehicles and UAVs.
- Education (0.70)
- Government (0.48)
- North America > Canada > Alberta (0.14)
- North America > United States > Texas (0.04)
- North America > United States > Texas > Travis County > Austin (0.14)
- North America > United States > Indiana > Tippecanoe County > West Lafayette (0.04)
- North America > United States > Indiana > Tippecanoe County > Lafayette (0.04)
- Information Technology > Security & Privacy (0.68)
- Banking & Finance > Trading (0.46)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Rule-Based Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Expert Systems (0.95)
- (4 more...)
- North America > United States (0.14)
- North America > The Bahamas (0.04)
- Government > Military (0.93)
- Information Technology > Security & Privacy (0.68)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (0.93)
- Europe > Austria > Vienna (0.14)
- North America > United States > California > Los Angeles County > Long Beach (0.14)
- Europe > France > Auvergne-Rhône-Alpes > Lyon > Lyon (0.04)
- (14 more...)
- Information Technology > Security & Privacy (1.00)
- Government (0.68)