An Experimental Study of Semantic Continuity for Deep Learning Models
Shangxi Wu, Dongyuan Lu, Xian Zhao, Lizhang Chen, Jitao Sang
arXiv.org Artificial Intelligence
Deep learning models achieve state-of-the-art performance across a wide range of computer vision tasks. From supervised and unsupervised learning to the now-popular self-supervised learning, new training paradigms have progressively improved the efficiency with which training data is used. However, issues such as adversarial examples reveal that current training paradigms still do not exploit datasets fully. Adversarial images, which appear nearly identical to the original images, can cause drastic changes in model output. In this paper, we find that many common non-semantic perturbations can likewise cause semantic-level interference in model outputs, as illustrated in Figure 1. This phenomenon indicates that the representations learned by deep learning models are discontinuous in semantic space. Ideally, derived samples carrying the same semantic information should lie in the neighborhood of the original samples, yet they are often mapped far from the original samples in the model's output space.
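The discontinuity the abstract describes can be probed empirically: apply a small, semantics-preserving perturbation to an input, then measure how far the model's feature representation moves. The following is a minimal NumPy sketch with a hypothetical random-feature "model" standing in for a trained network (it is not the paper's method or model); it only illustrates the measurement itself, comparing relative input change against cosine distance in output space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a feature extractor: a stack of random
# linear layers with ReLU activations (for illustration only).
weights = [rng.normal(scale=1.5, size=(64, 64)) for _ in range(6)]

def features(x):
    h = x
    for W in weights:
        h = np.maximum(W @ h, 0.0)  # linear layer followed by ReLU
    return h

def cosine_dist(a, b):
    # 1 - cosine similarity; 0 means identical direction, 2 means opposite
    return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

x = rng.normal(size=64)                      # "original sample"
x_pert = x + 0.01 * rng.normal(size=64)      # small non-semantic perturbation

d_in = np.linalg.norm(x_pert - x) / np.linalg.norm(x)   # relative input change
d_out = cosine_dist(features(x), features(x_pert))      # output-space distance

print(f"relative input change: {d_in:.4f}, output cosine distance: {d_out:.4f}")
```

For a semantically continuous model one would expect a small `d_in` to yield a comparably small `d_out`; a disproportionately large `d_out` is the kind of output-space jump the abstract attributes to non-semantic perturbations. How large the gap is in practice depends on the actual model and perturbation.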
Jun-17-2024