Eliminating Catastrophic Overfitting Via Abnormal Adversarial Examples Regularization
However, SSAT suffers from catastrophic overfitting (CO), a phenomenon that leads to a severely distorted classifier, making it vulnerable to multi-step adversarial attacks. In this work, we observe that some adversarial examples generated on the SSAT-trained network exhibit anomalous behaviour: although these training samples are generated by the inner maximization process, their associated loss decreases instead. We name these abnormal adversarial examples (AAEs).
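The defining property of an AAE is directly checkable: the attack step is supposed to increase a sample's loss, so any sample whose loss drops after perturbation is abnormal. A minimal sketch of that check, assuming per-example loss values before and after the inner-maximization step are available (the function name and interface are hypothetical, not from the paper):

```python
import numpy as np

def find_abnormal_adversarial_examples(clean_loss, adv_loss):
    """Flag abnormal adversarial examples (AAEs): samples whose loss
    *decreases* after the inner-maximization (attack) step, even though
    that step is meant to increase it.

    clean_loss, adv_loss: per-example loss values before and after the
    adversarial perturbation is applied. Returns a boolean mask that is
    True where a sample behaves abnormally.
    """
    clean_loss = np.asarray(clean_loss, dtype=float)
    adv_loss = np.asarray(adv_loss, dtype=float)
    return adv_loss < clean_loss
```

The fraction of `True` entries in the mask could then serve as a per-batch indicator of how close training is to catastrophic overfitting.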
Supplementary Material: Revisiting Visual Model Robustness: A Frequency Long-Tailed Distribution View. Zhiyu Lin
Fan et al. [2021] incorporates high-frequency views into contrastive learning. However, several works challenge the validity of this assumption. Yin et al. [2019] proposes a robustness analysis strategy based on Fourier heatmaps, which measures a model's sensitivity to frequency bases. Maiya et al. [2021] argues that model robustness does not have an intrinsic connection to frequency components. Beyond the frequency-component perspective, Chen et al. [2021] has shown that the CNN model should be consistent with the Human Visual System. To show the power-law distribution of natural images, we select CIFAR-10 [Krizhevsky et al., 2009], Tiny-ImageNet [Le and Yang, 2015], and ImageNet [Deng et al., 2009] for our experiments. We show an example of the division on ImageNet in Fig. 2, in which the high- and low-frequency components of the image, obtained according to the division radius, are also in line with our analysis. We conduct experiments on naturally trained models, using the test sets of the CIFAR-10, Tiny-ImageNet, and ImageNet-1k datasets.
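The frequency division described above can be sketched as a circular mask of a given radius in the shifted Fourier spectrum: frequencies inside the disk form the low-frequency component, the rest the high-frequency component. This is an illustrative sketch under that assumption; the paper's exact division (and the function name `split_frequency`) is not specified here:

```python
import numpy as np

def split_frequency(image, radius):
    """Split a 2-D grayscale image into low- and high-frequency
    components using a circular mask of the given radius around the
    spectrum center (illustrative sketch of a radius-based division).
    """
    # Move the zero-frequency term to the center of the spectrum.
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    mask = dist <= radius  # True inside the low-frequency disk

    # Inverse-transform each masked spectrum back to the image domain.
    low = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))
    high = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * ~mask)))
    return low, high
```

Because the two masks partition the spectrum, the low- and high-frequency components sum back to the original image, which makes the division easy to sanity-check.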
A Broader Impacts
MIM to enhance the adversarial robustness of downstream models. It is important to highlight that our paper focuses specifically on the adversarial robustness of ViTs, and our method provides an effective defense against severe adversarial attacks. We propose two hypotheses to explain our method's effectiveness. Figure 3(a) compares the results when the noise is known to the attacker versus unknown. When the attacker can access the noise, our model's robust accuracy does not improve much. The results indicate that both proposed hypotheses hold.