In video recognition, we need to sample multiple frames to represent each video, which makes the computational cost scale proportionally to the number of sampled frames. In most cases, only a small proportion of all frames is sampled for each input, which captures only limited information about the original video.
From the results, we can see that AdvBN alone improves the baseline model on all corruption types. All methods are implemented on ResNet-50 and trained on the original ImageNet training set. The default setting of our method on ResNet-50 uses 6 PGD steps, so training takes longer than standard training for the same number of epochs.
Not All Images are Worth 16x16 Words: Dynamic Transformers for Efficient Image Recognition
They split every 2D image into a fixed number of patches, each of which is treated as a token. Generally, representing an image with more tokens leads to higher prediction accuracy, while it also results in drastically increased computational cost. To achieve a decent trade-off between accuracy and speed, the number of tokens is empirically set to 16x16 or 14x14. In this paper, we argue that every image has its own characteristics, and ideally the token number should be conditioned on each individual input. In fact, we have observed that there exist a considerable number of "easy" images which can be accurately predicted with a mere 4x4 tokens, while only a small fraction of "hard" ones need a finer representation. Inspired by this phenomenon, we propose a Dynamic Transformer to automatically configure a proper number of tokens for each input image. This is achieved by cascading multiple Transformers with increasing numbers of tokens, which are sequentially activated in an adaptive fashion at test time, i.e., the inference is terminated once a sufficiently confident prediction is produced. We further design efficient feature reuse and relationship reuse mechanisms across different components of the Dynamic Transformer to reduce redundant computations.
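The early-exit cascade described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the stage models, token counts, and confidence threshold are hypothetical stand-ins, and the feature-reuse and relationship-reuse mechanisms are omitted.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cascade_predict(image, stages, threshold=0.9):
    """Run cascaded Transformers (coarse-to-fine token grids) in order,
    terminating once the top-1 confidence exceeds `threshold`.

    `stages` is a list of (num_tokens, model) pairs, e.g. a 4x4-token
    model first, then finer 7x7 and 14x14 models; each `model` maps an
    input to a list of class logits."""
    probs = [1.0]
    for num_tokens, model in stages:
        probs = softmax(model(image, num_tokens))
        if max(probs) >= threshold:
            break  # "easy" image: stop early, skip the costlier fine stages
    top1 = max(range(len(probs)), key=probs.__getitem__)
    return top1, probs[top1]
```

With a low threshold, confident coarse predictions exit at the first (cheapest) stage; raising the threshold forces more images through the finer, more expensive stages, which is the accuracy/speed trade-off the cascade exposes.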
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- Asia > China > Beijing > Beijing (0.04)
- Asia > China > Zhejiang Province > Hangzhou (0.04)
- Europe > United Kingdom (0.04)