A mean curvature flow arising in adversarial training

Leon Bungert, Tim Laux, Kerrek Stinson

arXiv.org Artificial Intelligence 

In the last decade, machine learning algorithms, and in particular deep learning, have experienced unprecedented success. Such methods have proven their capabilities, inter alia, for the difficult tasks of image classification and generation. Most recently, the advent of large language models is expected to have a strong impact on various aspects of society. At the same time, the success of machine learning is accompanied by concerns about the reliability and safety of its methods. Already more than ten years ago it was observed that neural networks for image classification are susceptible to adversarial attacks [35], meaning that imperceptible or seemingly harmless perturbations of images can lead to severe misclassifications. As a consequence, deploying such methods in situations that affect the integrity and safety of humans, e.g., in self-driving cars or medical image classification, is risky. To mitigate these risks, the scientific community has been developing different approaches to robustify machine learning in the presence of potential adversaries.
