Grid Saliency for Context Explanations of Semantic Segmentation
Lukas Hoyer, Mauricio Munoz, Prateek Katiyar, Anna Khoreva, Volker Fischer
Recently, there has been a growing interest in developing saliency methods that provide visual explanations of network predictions. Still, the usability of existing methods is limited to image classification models. To overcome this limitation, we extend the existing approaches to generate grid saliencies, which provide spatially coherent visual explanations for (pixel-level) dense prediction networks.
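The idea of extending saliency to dense prediction can be illustrated with a simple occlusion-style sketch: occlude each cell of a coarse grid and measure how much the model's score drops inside the prediction region being explained. This is only a minimal illustration of grid saliency, not the paper's perturbation-optimization method; `model`, `grid`, and `baseline` are assumed names for this sketch.

```python
import numpy as np

def grid_saliency_occlusion(model, image, target_mask, grid=(8, 8), baseline=0.0):
    """Occlusion-style grid saliency sketch for a dense prediction model.

    model: callable mapping an image to a per-pixel score map for the
    target class. target_mask: boolean map selecting the prediction
    region to explain. Returns a grid-shaped saliency map holding the
    drop in the target region's mean score when each cell is occluded.
    """
    H, W = image.shape[:2]
    gh, gw = grid
    base_score = model(image)[target_mask].mean()
    sal = np.zeros(grid)
    for i in range(gh):
        for j in range(gw):
            occluded = image.copy()
            # Replace one grid cell with the baseline value
            occluded[i * H // gh:(i + 1) * H // gh,
                     j * W // gw:(j + 1) * W // gw] = baseline
            sal[i, j] = base_score - model(occluded)[target_mask].mean()
    return sal
```

Cells whose occlusion strongly reduces the target-region score receive high saliency, yielding a spatially coherent explanation aligned with the grid.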
ZipCache: Accurate and Efficient KV Cache Quantization with Salient Token Identification
KV cache stores key and value states from previous tokens to avoid re-computation, yet it demands substantial storage space, especially for long sequences. Adaptive KV cache compression seeks to discern the saliency of tokens, preserving vital information while aggressively compressing those of less importance.
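A minimal sketch of the adaptive-compression idea described above: score each cached token's saliency (e.g. by the attention it has received), keep the most salient tokens at full precision, and quantize the rest to low bit-width. This is an illustrative simplification, not ZipCache's actual quantization scheme; `keep_ratio` and `low_bits` are assumed parameters.

```python
import numpy as np

def compress_kv_cache(keys, values, attn_scores, keep_ratio=0.25, low_bits=4):
    """Illustrative adaptive KV-cache compression (not the ZipCache algorithm).

    keys/values: (seq_len, dim) arrays; attn_scores: (seq_len,) saliency
    proxy, e.g. accumulated attention each token received. Salient tokens
    are kept in full precision; the rest are uniformly quantized.
    """
    seq_len = keys.shape[0]
    n_keep = max(1, int(seq_len * keep_ratio))
    salient = np.argsort(attn_scores)[-n_keep:]  # most-attended tokens
    mask = np.zeros(seq_len, dtype=bool)
    mask[salient] = True

    def quantize(x, bits):
        # Uniform quantization to 2**bits levels, then dequantize
        lo, hi = x.min(), x.max()
        scale = (hi - lo) / (2 ** bits - 1) or 1.0
        return np.round((x - lo) / scale) * scale + lo

    k_out, v_out = keys.copy(), values.copy()
    k_out[~mask] = quantize(keys[~mask], low_bits)
    v_out[~mask] = quantize(values[~mask], low_bits)
    return k_out, v_out, mask
```

The storage win comes from the non-salient majority being stored at `low_bits` per value while the few vital tokens stay lossless.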
TokenMixup: Efficient Attention-guided Token-level Data Augmentation for Transformers
Mixup is a commonly adopted data augmentation technique for image classification. Recent advances in mixup methods primarily focus on mixing based on saliency. However, many saliency detectors require intense computation and are especially burdensome for parameter-heavy transformer models. To this end, we propose TokenMixup, an efficient attention-guided token-level data augmentation method that aims to maximize the saliency of a mixed set of tokens. TokenMixup provides 15× faster saliency-aware data augmentation compared to gradient-based methods. Moreover, we introduce a variant of TokenMixup which mixes tokens within a single instance, thereby enabling multi-scale feature augmentation. Experiments show that our methods significantly improve the baseline models' performance on CIFAR and ImageNet-1K, while being more efficient than previous methods. We also reach state-of-the-art performance on CIFAR-100 among from-scratch transformer models.
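The core mixing step can be sketched as follows: use attention-derived saliency scores (cheap to obtain from the transformer itself) to replace the least salient tokens of one sample with the most salient tokens of another, so the mixed token set has maximal total saliency. This is a hedged illustration of the idea, not the paper's exact algorithm; `ratio` and the saliency inputs are assumptions of this sketch.

```python
import numpy as np

def token_mixup(tokens_a, tokens_b, sal_a, sal_b, ratio=0.5):
    """Illustrative attention-guided token-level mixup.

    tokens_*: (num_tokens, dim) arrays; sal_*: (num_tokens,) saliency
    scores, e.g. attention each token receives from the class token.
    Replaces the least-salient tokens of sample A with the most-salient
    tokens of sample B, and returns the label-mixing weight for A.
    """
    n = tokens_a.shape[0]
    n_mix = int(n * ratio)
    low_a = np.argsort(sal_a)[:n_mix]    # least salient positions in A
    high_b = np.argsort(sal_b)[-n_mix:]  # most salient tokens in B
    mixed = tokens_a.copy()
    mixed[low_a] = tokens_b[high_b]
    lam = 1 - n_mix / n                  # fraction of tokens kept from A
    return mixed, lam
```

Because the saliency scores come from attention maps already computed in the forward pass, no extra gradient computation is needed, which is where the speedup over gradient-based saliency methods comes from.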