Collaborating Authors

 Cheng, Jian


From Hashing to CNNs: Training Binary Weight Networks via Hashing

AAAI Conferences

Deep convolutional neural networks (CNNs) have shown appealing performance on various computer vision tasks in recent years, which motivates people to deploy CNNs in real-world applications. However, most state-of-the-art CNNs require large memory and computational resources, which hinders their deployment on mobile devices. Recent studies show that low-bit weight representations can greatly reduce storage and memory demand and also enable efficient network inference. To achieve this goal, we propose a novel approach named BWNH to train Binary Weight Networks via Hashing. In this paper, we first reveal the strong connection between inner-product-preserving hashing and binary weight networks, and show that training binary weight networks can be intrinsically regarded as a hashing problem. Based on this perspective, we propose an alternating optimization method to learn the hash codes instead of directly learning binary weights. Extensive experiments on CIFAR10, CIFAR100, and ImageNet demonstrate that our proposed BWNH outperforms the current state-of-the-art by a large margin.
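As background for the hashing view above, the standard closed-form binarization that approaches like BWNH improve upon can be sketched as follows. This is a minimal NumPy illustration of approximating real weights by a scaled binary matrix, not the paper's alternating hash-code optimization; the function name `binarize_weights` is ours.

```python
import numpy as np

def binarize_weights(W):
    """Approximate a real-valued weight matrix W by alpha * B, where
    B is {-1, +1}-valued and alpha is a positive scale.  This is the
    closed-form minimizer of ||W - alpha * B||_F: take the sign of W
    and the mean absolute weight.  Shown only to illustrate the kind
    of binary approximation that hashing-based training refines."""
    B = np.sign(W)
    B[B == 0] = 1  # map sign(0) to +1 so B stays in {-1, +1}
    alpha = np.abs(W).mean()
    return alpha, B

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
alpha, B = binarize_weights(W)
approx = alpha * B  # the low-bit stand-in for W
```

Storing `B` (1 bit per weight) plus a single float `alpha` is what yields the large memory savings the abstract refers to.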


Shoot to Know What: An Application of Deep Networks on Mobile Devices

AAAI Conferences

Convolutional neural networks (CNNs) have achieved impressive performance in a wide range of computer vision areas. However, their application on mobile devices remains challenging due to high computational complexity. In this demo, we propose the Quantized CNN (Q-CNN), an efficient framework for CNN models, to achieve efficient and accurate image classification on mobile devices. Our Q-CNN framework dramatically accelerates computation and reduces storage/memory consumption, so that mobile devices can independently run an ImageNet-scale CNN model. Experiments on the ILSVRC-12 dataset demonstrate a 4~6x speed-up and 15~20x compression, with merely a one-percentage-point drop in classification accuracy. Based on the Q-CNN framework, even mobile devices can accurately classify images within one second.
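To illustrate why quantized weights shrink storage, here is a minimal uniform-quantization sketch: floats become 8-bit integer codes plus two floats of metadata, roughly a 4x reduction from float32. Q-CNN itself quantizes convolutional layers with product quantization, which this sketch does not reproduce; the function names are illustrative.

```python
import numpy as np

def quantize_uniform(w, bits=8):
    # Map float weights to integer codes in [0, 2^bits - 1],
    # recording the (scale, lo) needed to map them back.
    lo, hi = float(w.min()), float(w.max())
    span = hi - lo
    scale = span / (2**bits - 1) if span > 0 else 1.0
    codes = np.round((w - lo) / scale).astype(np.uint8)
    return codes, scale, lo

def dequantize(codes, scale, lo):
    # Reconstruct approximate float weights from the codes.
    return codes.astype(np.float32) * scale + lo

w = np.linspace(-1.0, 1.0, 17)
codes, scale, lo = quantize_uniform(w)
w_hat = dequantize(codes, scale, lo)
```

The reconstruction error of uniform quantization is bounded by half a quantization step (`scale / 2`), which is what keeps the accuracy drop small at 8 bits.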


Recommendation by Mining Multiple User Behaviors with Group Sparsity

AAAI Conferences

Recently, some recommendation methods try to improve prediction results by integrating information from a user's multiple types of behaviors. How to model the dependence and independence between different behaviors is critical for these methods. In this paper, we propose a novel recommendation model, the Group-Sparse Matrix Factorization (GSMF), which factorizes the rating matrices for multiple behaviors into the user and item latent factor space with group sparsity regularization. It can (1) select different subsets of latent factors for different behaviors, reflecting that users' decisions on different behaviors are determined by different sets of factors; (2) model the dependence and independence between behaviors by automatically learning the shared and private factors for multiple behaviors; (3) allow the shared factors between different behaviors to differ, instead of all behaviors sharing the same set of factors. Experiments on real-world datasets demonstrate that our model integrates users' multiple types of behaviors into recommendation better than other state-of-the-art methods.
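The group-sparsity regularization mentioned above can be illustrated by its proximal step, group soft-thresholding, which zeroes out whole latent factors (columns) at once rather than individual entries. This is a generic L2,1 sketch under our own naming, not the paper's full alternating GSMF solver.

```python
import numpy as np

def group_soft_threshold(U, lam):
    """Proximal operator of lam * sum_j ||U[:, j]||_2 (the L2,1
    group-sparsity penalty).  Each column of U -- one latent factor
    across all users or items -- is shrunk toward zero, and columns
    whose norm falls below lam are zeroed out entirely.  This
    all-or-nothing behavior is what lets group sparsity select a
    subset of factors per behavior."""
    norms = np.linalg.norm(U, axis=0, keepdims=True)
    shrink = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
    return U * shrink

U = np.array([[3.0, 0.1],
              [4.0, 0.1]])
V = group_soft_threshold(U, 1.0)  # second factor is eliminated
```

With `lam = 1.0`, the first column (norm 5) is shrunk but kept, while the second column (norm about 0.14) is set exactly to zero, deselecting that factor.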


Confidence Inference in Bayesian Networks

arXiv.org Artificial Intelligence

We present two sampling algorithms for probabilistic confidence inference in Bayesian networks. These two algorithms, which we call AIS-BN-mu and AIS-BN-sigma, guarantee that estimates of posterior probabilities are, with a given probability, within a desired precision bound. Our algorithms are based on recent advances in sampling algorithms for (1) estimating the mean of bounded random variables and (2) adaptive importance sampling in Bayesian networks. In addition to providing a simple stopping rule for sampling, the AIS-BN-mu and AIS-BN-sigma algorithms are capable of guiding the learning process in the AIS-BN algorithm. An empirical evaluation of the proposed algorithms shows excellent performance, even for very unlikely evidence.
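A fixed-sample-size analogue of such a stopping rule, based on the Hoeffding bound for bounded random variables, can be sketched as follows. This is a simplified stand-in to show the (epsilon, delta)-guarantee idea, not the adaptive AIS-BN-mu/AIS-BN-sigma rules themselves.

```python
import math
import random

def hoeffding_sample_size(eps, delta, a=0.0, b=1.0):
    # Number of i.i.d. samples of an [a, b]-bounded random variable
    # needed so that the sample mean is within eps of the true mean
    # with probability at least 1 - delta, by Hoeffding's inequality:
    #   n >= ln(2 / delta) * (b - a)^2 / (2 * eps^2).
    return math.ceil(math.log(2.0 / delta) * (b - a) ** 2
                     / (2.0 * eps ** 2))

# Estimate the mean of a Bernoulli(0.3) variable to within 0.05,
# with 95% confidence, using the prescribed number of samples.
n = hoeffding_sample_size(eps=0.05, delta=0.05)
random.seed(0)
est = sum(random.random() < 0.3 for _ in range(n)) / n
```

Adaptive rules like those in the paper improve on this by stopping early when the observed samples already support the desired precision, rather than committing to a worst-case `n` up front.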