A²-Nets: Double Attention Networks
Yunpeng Chen, Yannis Kalantidis, Jianshu Li, Shuicheng Yan, Jiashi Feng
Neural Information Processing Systems
Learning to capture long-range relations is fundamental to image/video recognition. Existing CNN models generally rely on increasing depth to model such relations, which is highly inefficient. In this work, we propose the "double attention block", a novel component that aggregates and propagates informative global features from the entire spatio-temporal space of input images/videos, enabling subsequent convolution layers to access features from the entire space efficiently. The component applies a double attention mechanism in two steps: the first step gathers features from the entire space into a compact set through second-order attention pooling, and the second step adaptively selects and distributes features to each location via a second attention mechanism. The proposed double attention block is easy to adopt and can be plugged into existing deep neural networks conveniently.
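The two-step mechanism described above can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the projection matrices `wa`, `wb`, and `wv` stand in for the 1×1 convolutions the paper uses, and all names are hypothetical. Step one pools the input into a compact set of global descriptors via second-order attention pooling; step two lets every spatial location select its own mixture of those descriptors.

```python
import numpy as np

def softmax(x, axis):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def double_attention(x, wa, wb, wv):
    """Sketch of a double attention block on flattened features.

    x  : (c, n)  input features, n = H*W spatial locations
    wa : (m, c)  projection producing feature maps A (stand-in for a 1x1 conv)
    wb : (k, c)  projection producing k attention maps B
    wv : (k, c)  projection producing distribution maps V
    """
    a = wa @ x                    # (m, n) feature maps
    b = softmax(wb @ x, axis=1)   # (k, n) attention over spatial locations
    # Step 1 (gather): second-order attention pooling collects the whole
    # space into k compact global descriptors.
    g = a @ b.T                   # (m, k) global feature set
    # Step 2 (distribute): each location adaptively selects a convex
    # combination of the k global descriptors.
    v = softmax(wv @ x, axis=0)   # (k, n) attention over descriptors
    z = g @ v                     # (m, n) features distributed back to space
    return z
```

Note that the gather step is global (every output descriptor sees all n locations), so a convolution layer applied after `z` can access information from the entire spatial extent in a single step, rather than growing its receptive field through depth.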