A mean curvature flow arising in adversarial training
Leon Bungert, Tim Laux, Kerrek Stinson
arXiv.org Artificial Intelligence
In the last decade, machine learning algorithms, and in particular deep learning, have enjoyed unprecedented success. Such methods have proven their capabilities, inter alia, for the difficult tasks of image classification and generation. Most recently, the advent of large language models is expected to have a strong impact on various aspects of society. At the same time, the success of machine learning is accompanied by concerns about the reliability and safety of its methods. Already more than ten years ago, it was observed that neural networks for image classification are susceptible to adversarial attacks [35], meaning that imperceptible or seemingly harmless perturbations of images can lead to severe misclassifications. As a consequence, deploying such methods in situations that affect the integrity and safety of humans, e.g., in self-driving cars or medical image classification, is risky. To mitigate these risks, the scientific community has been developing different approaches to robustify machine learning in the presence of potential adversaries.
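To make the notion of an imperceptible adversarial perturbation concrete, the following is a minimal sketch of the fast gradient sign method (FGSM), one standard attack from the literature; it is not the specific construction analyzed in the paper, and the model architecture, the budget epsilon, and the helper name fgsm_attack are hypothetical choices for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy classifier standing in for an image model (hypothetical architecture).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

def fgsm_attack(x, y, epsilon=0.1):
    """Return an adversarial example within an L-infinity ball of radius epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed-gradient step that increases the loss, then clamp back
    # to the valid image range [0, 1].
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

# Example: a random "image" and label; the perturbation is small in sup-norm
# yet can already flip the prediction of an undefended model.
x = torch.rand(1, 1, 28, 28)
y = torch.tensor([3])
x_adv = fgsm_attack(x, y)
print((x_adv - x).abs().max())  # bounded by epsilon
```

Adversarial training, one of the robustification strategies alluded to above, amounts to minimizing the loss on such worst-case perturbed inputs rather than on the clean data.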
Apr-22-2024