KernelWarehouse: Rethinking the Design of Dynamic Convolution

Li, Chao; Yao, Anbang

arXiv.org Artificial Intelligence

Dynamic convolution learns a linear mixture of n static kernels weighted with their input-dependent attentions, demonstrating superior performance over normal convolution. However, it increases the number of convolutional parameters by n times, and thus is not parameter efficient. This has blocked research progress toward exploring the setting n > 100 (an order of magnitude larger than the typical setting n < 10), which could push forward the performance boundary of dynamic convolution while enjoying parameter efficiency. To fill this gap, in this paper, we propose KernelWarehouse, a more general form of dynamic convolution, which redefines the basic concepts of "kernels", "assembling kernels" and "attention function" through the lens of exploiting convolutional parameter dependencies within the same layer and across neighboring layers of a ConvNet. We validate the effectiveness of KernelWarehouse on the ImageNet and MS-COCO datasets using various ConvNet architectures. Intriguingly, KernelWarehouse is also applicable to Vision Transformers, and it can even reduce the model size of a backbone while improving model accuracy. For instance, KernelWarehouse (n = 4) achieves 5.61%|3.90%|4.38% absolute top-1 accuracy gains on the ResNet18|MobileNetV2|DeiT-Tiny backbones, and KernelWarehouse (n = 1/4), with a 65.10% model size reduction, still achieves a 2.29% gain on the ResNet18 backbone. The code and models are available at https://github.com/OSVAI/KernelWarehouse.
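As a rough illustration of the baseline this abstract starts from, the PyTorch sketch below implements vanilla dynamic convolution: an attention head produces input-dependent weights over n static kernels, which are mixed into one kernel per sample before a single convolution. The class name DynamicConv2d, the pooling-based attention head, and the initialization are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of vanilla dynamic convolution (hypothetical names;
# not the authors' released code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, k=3, n=4):
        super().__init__()
        # n static kernels: parameter count grows n times vs. normal conv
        self.kernels = nn.Parameter(0.02 * torch.randn(n, out_ch, in_ch, k, k))
        self.attn = nn.Linear(in_ch, n)  # attention head over the n kernels
        self.k = k

    def forward(self, x):
        b = x.size(0)
        # input-dependent attentions from globally pooled features
        a = torch.softmax(self.attn(x.mean(dim=(2, 3))), dim=1)  # (b, n)
        # per-sample linear mixture of the n static kernels
        w = torch.einsum('bn,nocij->bocij', a, self.kernels)
        w = w.reshape(-1, w.size(2), self.k, self.k)  # (b*out_ch, in_ch, k, k)
        # grouped-conv trick applies a different mixed kernel to each sample
        x = x.reshape(1, -1, x.size(2), x.size(3))
        y = F.conv2d(x, w, padding=self.k // 2, groups=b)
        return y.reshape(b, -1, y.size(2), y.size(3))
```

The n-fold growth of self.kernels is exactly the parameter inefficiency the abstract criticizes, which motivates the warehouse design described in the second entry below.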


KernelWarehouse: Towards Parameter-Efficient Dynamic Convolution

Li, Chao; Yao, Anbang

arXiv.org Artificial Intelligence

Dynamic convolution learns a linear mixture of n static kernels weighted with their sample-dependent attentions, demonstrating superior performance compared to normal convolution. However, existing designs are parameter-inefficient: they increase the number of convolutional parameters by n times. This, together with the optimization difficulty, has blocked research progress in dynamic convolution toward using a significantly larger value of n (e.g., n > 100 instead of the typical setting n < 10) to push forward the performance boundary. In this paper, we propose KernelWarehouse, a more general form of dynamic convolution, which can strike a favorable trade-off between parameter efficiency and representation power. Its key idea is to redefine the basic concepts of "kernels" and "assembling kernels" in dynamic convolution from the perspective of reducing kernel dimension and significantly increasing kernel number. In principle, KernelWarehouse enhances convolutional parameter dependencies within the same layer and across successive layers via tactful kernel partition and warehouse sharing. Specifically, KernelWarehouse first divides a static kernel at any convolutional layer of a ConvNet into m disjoint kernel cells with the same dimensions, then computes each kernel cell as a linear mixture based on a predefined "warehouse" consisting of n kernel cells (e.g., n = 108) that is also shared across multiple neighboring convolutional layers, and finally replaces the static kernel by assembling its corresponding m mixtures in order, yielding a high degree of freedom to fit a desired parameter budget. To facilitate the learning of the attentions for summing up kernel cells, we also present a new attention function.
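The simplified PyTorch sketch below illustrates the kernel partition and warehouse sharing described above. The names Warehouse and KWConv2d are hypothetical, the partition here is along output channels only, and a plain softmax stands in for the paper's new attention function; the authors' actual implementation is at https://github.com/OSVAI/KernelWarehouse.

```python
# Simplified sketch of KernelWarehouse under stated assumptions
# (hypothetical names; not the authors' released code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Warehouse(nn.Module):
    """n kernel cells shared by multiple neighboring convolutional layers."""
    def __init__(self, n, cell_shape):
        super().__init__()
        self.cells = nn.Parameter(0.02 * torch.randn(n, *cell_shape))

class KWConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, k, warehouse, m):
        super().__init__()
        assert out_ch % m == 0  # m same-sized cells per static kernel
        self.in_ch, self.out_ch, self.k, self.m = in_ch, out_ch, k, m
        self.warehouse = warehouse                  # shared, not per-layer
        n = warehouse.cells.size(0)
        self.attn = nn.Linear(in_ch, m * n)         # one attention per cell

    def forward(self, x):
        b, n = x.size(0), self.warehouse.cells.size(0)
        # sample-dependent attentions: a softmax over the warehouse per cell
        a = self.attn(x.mean(dim=(2, 3))).view(b, self.m, n).softmax(dim=-1)
        # each kernel cell is a linear mixture of the n warehouse cells
        cells = torch.einsum('bmn,nocij->bmocij', a, self.warehouse.cells)
        # assemble the m mixtures in order to rebuild the full static kernel
        w = cells.reshape(b * self.out_ch, self.in_ch, self.k, self.k)
        x = x.reshape(1, -1, x.size(2), x.size(3))
        y = F.conv2d(x, w, padding=self.k // 2, groups=b)
        return y.reshape(b, -1, y.size(2), y.size(3))

# Usage: two layers with identical cell shape share one warehouse of n cells,
# so each layer's kernel is assembled from, not added to, the parameter pool.
wh = Warehouse(n=108, cell_shape=(16, 64, 3, 3))    # out_ch/m = 64/4 = 16
conv1 = KWConv2d(64, 64, 3, wh, m=4)
conv2 = KWConv2d(64, 64, 3, wh, m=4)
```

Because the warehouse size n and the partition count m are set independently of any one layer's kernel shape, this construction can meet a chosen parameter budget, which is how the abstract's n = 1/4 (parameter-reducing) configuration becomes possible.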