Leng, Cong
Extremely Low Bit Neural Network: Squeeze the Last Bit Out With ADMM
Leng, Cong (Alibaba Group) | Dou, Zesheng (Alibaba Group) | Li, Hao (Alibaba Group) | Zhu, Shenghuo (Alibaba Group) | Jin, Rong (Alibaba Group)
Although deep learning models are highly effective for various learning tasks, their high computational cost prohibits their deployment in scenarios where either memory or computational resources are limited. In this paper, we focus on compressing and accelerating deep models whose network weights are represented with very small numbers of bits, referred to as extremely low bit neural networks. We model this problem as a discretely constrained optimization problem. Borrowing the idea of the Alternating Direction Method of Multipliers (ADMM), we decouple the continuous parameters from the discrete constraints of the network and cast the original hard problem into several subproblems. We propose to solve these subproblems using extragradient and iterative quantization algorithms, which lead to considerably faster convergence than conventional optimization methods. Extensive experiments on image recognition and object detection verify that the proposed algorithm is more effective than state-of-the-art approaches when it comes to extremely low bit neural networks.
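A minimal sketch of the ADMM decoupling the abstract describes, on a toy quadratic loss with a ternary constraint set {-alpha, 0, +alpha}; the support-selection heuristic, step sizes, and loss here are illustrative placeholders, not the paper's extragradient or iterative-quantization solvers:

# Sketch only: decouple continuous weights W from a discrete copy G via ADMM.
import numpy as np

def project_to_ternary(w):
    """Project weights onto the discrete set {-alpha, 0, +alpha}.
    The magnitude threshold used to pick the support is a heuristic
    assumption; given the support, alpha is the mean absolute value
    of the surviving entries."""
    mask = np.abs(w) > 0.5 * np.abs(w).max()
    alpha = np.abs(w[mask]).mean() if mask.any() else 0.0
    return alpha * np.sign(w) * mask

def admm_quantize(grad_f, w, rho=1.0, steps=100, lr=0.01, inner=20):
    """Alternate a continuous step on the augmented Lagrangian with a
    Euclidean projection onto the discrete set, plus a dual update."""
    g = project_to_ternary(w)        # discrete copy of the weights
    lam = np.zeros_like(w)           # scaled dual variable
    for _ in range(steps):
        # W-step: descend on f(W) + rho/2 * ||W - G + lam||^2
        for _ in range(inner):
            w -= lr * (grad_f(w) + rho * (w - g + lam))
        # G-step: projection onto the discrete constraint set
        g = project_to_ternary(w + lam)
        # dual update
        lam += w - g
    return g

# Toy example: f(w) = ||w - w_target||^2 for a random target.
rng = np.random.default_rng(0)
target = rng.normal(size=8)
w_quant = admm_quantize(lambda w: 2.0 * (w - target), target.copy())
print(w_quant)  # entries lie in {-alpha, 0, +alpha}

In the full method the inner W-step runs over the actual network loss with an extragradient update rather than this plain gradient descent, but the alternation (continuous step, projection, dual update) is the same.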
Shoot to Know What: An Application of Deep Networks on Mobile Devices
Wu, Jiaxiang (Institute of Automation, Chinese Academy of Sciences) | Hu, Qinghao (Institute of Automation, Chinese Academy of Sciences) | Leng, Cong (Institute of Automation, Chinese Academy of Sciences) | Cheng, Jian (Institute of Automation, Chinese Academy of Sciences)
Convolutional neural networks (CNNs) have achieved impressive performance in a wide range of computer vision areas. However, their application on mobile devices remains intractable due to their high computational complexity. In this demo, we propose the Quantized CNN (Q-CNN), an efficient framework for CNN models that enables efficient and accurate image classification on mobile devices. Our Q-CNN framework dramatically accelerates computation and reduces storage/memory consumption, so that mobile devices can independently run an ImageNet-scale CNN model. Experiments on the ILSVRC-12 dataset demonstrate a 4-6x speed-up and 15-20x compression, with merely a one-percentage-point drop in classification accuracy. With the Q-CNN framework, even mobile devices can accurately classify images within one second.
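A minimal sketch of the product-quantization idea behind this kind of quantized inference, applied to a fully connected layer; the subspace count, codebook size, and all names here are illustrative assumptions, not the paper's Q-CNN implementation:

# Sketch only: approximate x @ W by k-means codebooks and table lookups.
import numpy as np

def train_codebooks(W, n_sub=4, k=16, iters=20, seed=0):
    """Split the input dimension into n_sub subspaces and run plain
    Lloyd k-means on the weight subvectors of each subspace."""
    rng = np.random.default_rng(seed)
    d_in, d_out = W.shape
    d_sub = d_in // n_sub
    codebooks, codes = [], []
    for s in range(n_sub):
        sub = W[s * d_sub:(s + 1) * d_sub, :].T        # (d_out, d_sub)
        centers = sub[rng.choice(d_out, k, replace=False)]
        for _ in range(iters):
            dists = ((sub[:, None, :] - centers[None]) ** 2).sum(-1)
            assign = dists.argmin(1)
            for c in range(k):
                if (assign == c).any():
                    centers[c] = sub[assign == c].mean(0)
        codebooks.append(centers)
        codes.append(assign)
    return codebooks, codes, d_sub

def quantized_forward(x, codebooks, codes, d_sub):
    """Approximate x @ W: precompute inner products of each input
    subvector with every codeword, then gather them per output unit."""
    out = 0.0
    for s, (centers, assign) in enumerate(zip(codebooks, codes)):
        lut = centers @ x[s * d_sub:(s + 1) * d_sub]   # (k,) lookup table
        out = out + lut[assign]                         # one lookup per unit
    return out

rng = np.random.default_rng(1)
W = rng.normal(size=(64, 32))         # (d_in, d_out)
x = rng.normal(size=64)
cbs, codes, d_sub = train_codebooks(W)
approx = quantized_forward(x, cbs, codes, d_sub)
print(np.abs(approx - x @ W).mean())  # small approximation error

The speed-up and compression come from replacing most multiplications with table lookups and from storing short codes plus small codebooks instead of full-precision weights.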