
The Highest-Trending Research Papers From CVPR 2020

#artificialintelligence

CVPR 2020 is another major AI conference taking place 100% virtually this year. Here we've picked out the research papers that began trending within the AI research community months before their presentation at CVPR 2020. These papers cover the efficiency of object detectors, novel techniques for converting RGB-D images into 3D photography, and autoencoders that go beyond the capabilities of generative adversarial networks (GANs) in image generation and manipulation. Subscribe to our AI Research mailing list at the bottom of this article to be alerted when we release new summaries. If you'd like to skip around, here are the papers we featured:

Model efficiency has become increasingly important in computer vision.


Weekly Papers Multi-Label Deep Forest (MLDF); Huawei UK Critiques DeepMind α-Rank

#artificialintelligence

Close to a thousand machine learning papers are published every week. On Fridays, Synced selects seven studies from the last seven days that present topical, innovative, or otherwise interesting or important research that we believe may be of special interest to our readers. Author: Liang Yang, Xi-Zhu Wu, Yuan Jiang, Zhi-Hua Zhou from National Key Laboratory for Novel Software Technology, Nanjing University Abstract: In multi-label learning, each instance is associated with multiple labels, and the crucial task is to leverage label correlations in building models. Deep neural network methods usually embed the feature and label information jointly into a latent space to exploit label correlations. However, the success of these methods depends heavily on the precise choice of model depth.
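The label correlations that multi-label methods such as MLDF try to exploit can be made concrete with a toy example. The sketch below (dataset and label names are illustrative, not from the paper) builds a label co-occurrence matrix, the simplest view of which labels tend to appear together:

```python
import numpy as np

# Hypothetical multi-label dataset: 4 instances, 3 labels (1 = label present).
# The data is made up purely for illustration.
Y = np.array([
    [1, 1, 0],
    [1, 1, 0],
    [0, 1, 1],
    [1, 0, 0],
])

# Entry (i, j) counts how often labels i and j appear on the same instance.
# Off-diagonal mass is exactly the correlation structure that joint
# feature-label embeddings attempt to capture in a latent space.
cooccurrence = Y.T @ Y
print(cooccurrence)
```

Here labels 0 and 1 co-occur twice while labels 0 and 2 never do, so a model that leverages correlations can use the presence of label 0 as evidence for label 1.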


Google AI open-sources EfficientDet for state-of-the-art object detection

#artificialintelligence

Members of the Google Brain team and Google AI this week open-sourced EfficientDet, an AI tool that achieves state-of-the-art object detection while using less compute. Creators of the system say it also achieves faster performance on CPUs and GPUs than other popular object detection models such as YOLO or AmoebaNet. When tasked with semantic segmentation, another task related to object detection, EfficientDet also achieves exceptional performance. Semantic segmentation experiments were conducted with the PASCAL visual object challenge data set. EfficientDet is the next-generation version of EfficientNet, a family of advanced object detection models made available last year for Coral boards.


Relevant-features based Auxiliary Cells for Energy Efficient Detection of Natural Errors

arXiv.org Machine Learning

Deep neural networks have demonstrated state-of-the-art performance on many classification tasks. However, they have no inherent capability to recognize when their predictions are wrong. There have been several recent efforts to detect natural errors, but the suggested mechanisms pose additional energy requirements. To address this issue, we propose an ensemble of classifiers at hidden layers to enable energy-efficient detection of natural errors. In particular, we append Relevant-features based Auxiliary Cells (RACs), which are class-specific binary linear classifiers trained on relevant features. The consensus of the RACs is used to detect natural errors. Based on the combined confidence of the RACs, classification can be terminated early, resulting in energy-efficient detection. We demonstrate the effectiveness of our technique on image classification datasets such as CIFAR-10, CIFAR-100 and Tiny-ImageNet.
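The early-termination idea above can be sketched in a few lines: attach one binary linear classifier per class to a hidden-layer feature vector, and stop inference when their combined confidence clears a threshold. This is a minimal illustration, not the paper's implementation; the weights, threshold, and dimensions below are invented for the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical RAC-style auxiliary cells: one binary linear classifier per
# class, operating on a 4-dimensional hidden feature vector. Toy weights.
W = np.array([
    [2.0, 0.0, 0.0, 0.0],   # cell for class 0
    [0.0, 2.0, 0.0, 0.0],   # cell for class 1
    [0.0, 0.0, 2.0, 0.0],   # cell for class 2
])
b = np.zeros(3)

def early_exit(hidden_feat, threshold=0.9):
    """Return (class, True) if the auxiliary cells are confident enough to
    terminate inference early, else (None, False) to continue deeper."""
    scores = sigmoid(W @ hidden_feat + b)   # per-class confidences
    if scores.max() >= threshold:
        return int(scores.argmax()), True   # confident: stop, saving energy
    return None, False                      # ambiguous: run remaining layers

# A feature strongly aligned with class 0's cell exits early ...
print(early_exit(np.array([3.0, 0.0, 0.0, 0.0])))   # (0, True)
# ... while an ambiguous feature falls through to the deeper layers.
print(early_exit(np.array([0.1, 0.1, 0.0, 0.0])))   # (None, False)
```

Energy savings come from the second branch being rare on easy inputs: most samples terminate at an early layer and never pay for the full forward pass.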


Deep Learning Approximation: Zero-Shot Neural Network Speedup

arXiv.org Machine Learning

Neural networks offer high-accuracy solutions to a range of problems, but are costly to run in production systems because of computational and memory requirements during a forward pass. Given a trained network, we propose a technique called Deep Learning Approximation to build a faster network in a tiny fraction of the time required for training, by manipulating only the network structure and coefficients, without requiring re-training or access to the training data. Speedup is achieved by applying a sequence of independent optimizations that reduce the floating-point operations (FLOPs) required to perform a forward pass. First, lossless optimizations are applied, followed by lossy approximations using singular value decomposition (SVD) and low-rank matrix decomposition. The optimal approximation is chosen by weighing the relative accuracy loss and FLOP reduction according to a single parameter specified by the user. On PASCAL VOC 2007 with the YOLO network, we show an end-to-end 2x speedup in a network forward pass with a 5% drop in mAP that can be re-gained by fine-tuning.
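The lossy SVD step described above can be illustrated on a single dense layer: replace an m-by-n weight matrix W with a rank-k factorization so the forward pass costs roughly k*n + m*k multiply-adds instead of m*n. This is a generic low-rank sketch with made-up sizes, not the paper's exact procedure:

```python
import numpy as np

# Illustrative layer sizes and rank (chosen for the example only).
m, n, k = 256, 512, 32
rng = np.random.default_rng(1)
W = rng.normal(size=(m, n))

# Truncated SVD: keep the k largest singular values.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
U_k = U[:, :k] * s[:k]        # (m, k), singular values folded in
V_k = Vt[:k, :]               # (k, n)

# Approximate FLOP counts for a forward pass x -> W @ x.
flops_full = m * n            # dense layer
flops_lowrank = k * n + m * k # two smaller layers: x -> V_k @ x -> U_k @ (...)
print(flops_full, flops_lowrank)

x = rng.normal(size=n)
y_full = W @ x                # original forward pass
y_approx = U_k @ (V_k @ x)    # factored forward pass, same output shape
```

The factored form wins whenever k < m*n/(m+n); here 24,576 FLOPs versus 131,072, at the cost of an approximation error that the paper's user-specified parameter trades off against speed.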