Revisiting Sparse Convolutional Model for Visual Recognition - Supplementary Material - Xili Dai
As we explain next, this is made possible by sparse modeling with CSC-layers.

B.1 Method

We apply the visualization method described above to the SDNet-18 trained on ImageNet (see Sec. 4). The results are provided in Figure B.1. It can be observed that the shallow layers (e.g., layers 1-5) capture rich details of the input, while the deeper layers of SDNet-18 progressively remove unrelated details from the network input.

In Figure C.1, we provide a visualization of the learned dictionary in the first layer of SDNet-18. The results also show that our CSC-layer feature maps are highly sparse.

Table D.1 compares SDNet-18/34 and SDNet-18/34-All on CIFAR-10 and CIFAR-100. Both models achieve high accuracy, while SDNet-18 is significantly faster.

Figure B.1: Visualization of feature maps for 5 images at selected layers of an SDNet-18 trained on ImageNet.

Figure C.1: Visualization of the learned dictionary of the first layer of SDNet-18-All trained on ImageNet.
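The sparsity of CSC-layer feature maps comes from the optimization problem the layer solves: each feature map is the solution of an l1-regularized reconstruction problem with respect to a learned dictionary. The following is a minimal illustrative sketch, not the paper's implementation: it uses a dense dictionary matrix in place of a convolutional one, and solves the sparse coding problem with plain ISTA. All names (`ista_sparse_code`, `soft_threshold`) and parameter values are assumptions for the example.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the l1 norm: shrinks values toward zero.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_sparse_code(x, D, lam=0.1, n_iter=100):
    """Solve min_z 0.5*||x - D z||^2 + lam*||z||_1 with ISTA.

    A CSC-layer solves the same kind of problem with a convolutional
    dictionary; here D is an ordinary matrix purely for illustration.
    """
    L = np.linalg.norm(D, 2) ** 2       # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ z - x)        # gradient of the quadratic term
        z = soft_threshold(z - grad / L, lam / L)
    return z

rng = np.random.default_rng(0)
D = rng.standard_normal((16, 64))
D /= np.linalg.norm(D, axis=0)          # unit-norm dictionary atoms
z_true = np.zeros(64)
z_true[[3, 17, 42]] = [1.0, -0.8, 0.5]  # a 3-sparse ground-truth code
x = D @ z_true
z = ista_sparse_code(x, D, lam=0.05, n_iter=500)
sparsity = np.mean(np.abs(z) < 1e-3)    # fraction of near-zero code entries
print(f"code sparsity: {sparsity:.2f}")
```

Because the l1 penalty drives most code entries exactly to zero, the recovered code is highly sparse, which mirrors the sparsity of the CSC-layer feature maps visualized in Figure C.1.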