A Fourier Perspective on Model Robustness in Computer Vision
Achieving robustness to distributional shift is a longstanding and challenging goal of computer vision. Data augmentation is a commonly used approach for improving robustness; however, robustness gains are typically not uniform across corruption types. Indeed, increasing performance in the presence of random noise is often met with reduced performance on other corruptions, such as contrast change. Understanding when and why these trade-offs occur is a crucial step towards mitigating them. Towards this end, we investigate recently observed trade-offs caused by Gaussian data augmentation and adversarial training. We find that both methods improve robustness to corruptions that are concentrated in the high frequency domain while reducing robustness to corruptions that are concentrated in the low frequency domain. This suggests that one way to mitigate these trade-offs via data augmentation is to use a more diverse set of augmentations. Motivated by this observation, we find that AutoAugment, a recently proposed data augmentation policy optimized for clean accuracy, achieves state-of-the-art robustness on the CIFAR-10-C benchmark.
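The abstract's central diagnostic can be illustrated concretely: measure where a corruption's energy lies in the Fourier domain. Below is a minimal sketch (not the paper's code; the toy image, radius, and function names are illustrative assumptions) comparing additive Gaussian noise, whose spectrum is roughly flat and thus mostly high-frequency, against a contrast change, whose perturbation inherits the image's own low-frequency-heavy spectrum.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a smooth natural image: a separable Hann-window bump,
# whose energy concentrates near zero frequency.
image = np.outer(np.hanning(32), np.hanning(32))

# Two example corruption deltas:
# additive Gaussian noise (flat spectrum -> mostly high frequency) ...
noise_delta = 0.1 * rng.standard_normal(image.shape)
# ... and a contrast change, i.e. a rescaling of the image about its mean,
# so the delta's spectrum follows the image's (low-frequency) spectrum.
contrast_delta = 0.5 * (image - image.mean())

def high_freq_energy_fraction(delta, radius=8):
    """Fraction of the perturbation's spectral energy lying outside a
    low-frequency disk of the given radius in the centered 2D spectrum."""
    spectrum = np.fft.fftshift(np.fft.fft2(delta))
    energy = np.abs(spectrum) ** 2
    h, w = delta.shape
    yy, xx = np.mgrid[:h, :w]
    dist = np.hypot(yy - h // 2, xx - w // 2)
    return energy[dist > radius].sum() / energy.sum()

# Gaussian noise should put most of its energy outside the disk,
# the contrast change almost none.
print(high_freq_energy_fraction(noise_delta))
print(high_freq_energy_fraction(contrast_delta))
```

Under this diagnostic, an augmentation that only teaches invariance to the first kind of delta would be expected to help with noise-like corruptions while doing little for (or hurting on) contrast-like ones, which is the trade-off the abstract describes.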
Reviews: A Fourier Perspective on Model Robustness in Computer Vision
This paper explores a useful direction for studying adversarial robustness, and some of the experimental results add value to understanding this topic. Originality: The work seems original to me. Clarity: The presentation is mostly clear, though some terminology could be sharpened. Conversely, what are non-i.i.d. corruptions here? Do you mean that the training and test distributions are the same in the i.i.d. case?
The paper proposes an interesting angle for investigating the robustness of convnets: looking at the Fourier spectrum of images and/or perturbations. Two of the reviewers were very positive, while R2 raised some concerns and was ultimately not completely satisfied by the rebuttal. However, in discussion among the reviewers and the AC, everybody agreed that the paper has potentially important contributions. Despite the shortcomings, I recommend accepting the paper as a poster. I also recommend that the authors take the detailed comments and improvement suggestions from all the reviewers into account.
Authors: Dong Yin, Raphael Gontijo Lopes, Jon Shlens, Ekin Dogus Cubuk, Justin Gilmer