Google researchers debut EfficientNets for CNN model scaling without the tedium • DEVCLASS

#artificialintelligence

Google researchers have open-sourced EfficientNets, a method for scaling up CNN models that they claim is up to 10 times more efficient than current "state-of-the-art" techniques. The method is detailed in a paper being presented at next month's International Conference on Machine Learning, and promises to remove at least some of the "tedious manual tuning" conventional methods require. According to Mingxing Tan, Staff Software Engineer, and Quoc V. Le, Principal Scientist, at Google AI, the researchers set out to find a way to scale up a CNN more accurately and efficiently than the conventional practice, which is to "arbitrarily increase the CNN depth or width, or to use larger input image resolution for training and evaluation." "While these methods do improve accuracy, they usually require tedious manual tuning, and still often yield suboptimal performance," the team points out. Their alternative was to use "a simple yet highly effective compound coefficient to scale up CNNs in a more structured manner. Unlike conventional approaches that arbitrarily scale network dimensions, such as width, depth and resolution, our method uniformly scales each dimension with a fixed set of scaling coefficients."
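
As a rough illustration of that compound-coefficient idea, the sketch below scales a baseline network's depth, width and input resolution together from a single coefficient phi, using the alpha/beta/gamma constants reported in the EfficientNet paper; the baseline values and the rounding are illustrative assumptions, not the paper's exact implementation.

```python
import math

# Compound scaling: depth ~ alpha^phi, width ~ beta^phi, resolution ~ gamma^phi,
# with alpha * beta^2 * gamma^2 ~= 2, so FLOPs grow roughly 2^phi.
# The constants are those reported for EfficientNet; the baseline
# layers/width/resolution and the rounding below are illustrative assumptions.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15

def compound_scale(phi, base_layers=16, base_width=1.0, base_resolution=224):
    depth_mult = ALPHA ** phi   # more layers
    width_mult = BETA ** phi    # more channels per layer
    res_mult = GAMMA ** phi     # larger input images
    return (
        math.ceil(base_layers * depth_mult),
        round(base_width * width_mult, 2),
        int(round(base_resolution * res_mult)),
    )

for phi in range(4):
    layers, width, res = compound_scale(phi)
    print(f"phi={phi}: ~{layers} layers, width x{width}, {res}x{res} input")
```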


EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks

arXiv.org Machine Learning

Convolutional Neural Networks (ConvNets) are commonly developed at a fixed resource budget, and then scaled up for better accuracy if more resources are available. In this paper, we systematically study model scaling and identify that carefully balancing network depth, width, and resolution can lead to better performance. Based on this observation, we propose a new scaling method that uniformly scales all dimensions of depth/width/resolution using a simple yet highly effective compound coefficient. We demonstrate the effectiveness of this method on scaling up MobileNets and ResNet. To go even further, we use neural architecture search to design a new baseline network and scale it up to obtain a family of models, called EfficientNets, which achieve much better accuracy and efficiency than previous ConvNets. In particular, our EfficientNet-B7 achieves state-of-the-art 84.4% top-1 / 97.1% top-5 accuracy on ImageNet, while being 8.4x smaller and 6.1x faster on inference than the best existing ConvNet. Our EfficientNets also transfer well and achieve state-of-the-art accuracy on CIFAR-100 (91.7%), Flowers (98.8%), and 3 other transfer learning datasets, with an order of magnitude fewer parameters. Source code is at https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet.
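
For readers who just want to try the released models, the following is a minimal sketch assuming a recent TensorFlow release, which bundles EfficientNet-B0 through B7 under tf.keras.applications (the paper's reference code lives in the tensorflow/tpu repository linked above); "example.jpg" is a placeholder path.

```python
import numpy as np
import tensorflow as tf

# Load ImageNet-pretrained EfficientNet-B0 (assumes a TF version that ships
# the EfficientNet family in tf.keras.applications).
model = tf.keras.applications.EfficientNetB0(weights="imagenet")

# "example.jpg" is a placeholder; B0's native input resolution is 224x224.
img = tf.keras.preprocessing.image.load_img("example.jpg", target_size=(224, 224))
x = np.expand_dims(tf.keras.preprocessing.image.img_to_array(img), axis=0)
x = tf.keras.applications.efficientnet.preprocess_input(x)

preds = model.predict(x)
print(tf.keras.applications.efficientnet.decode_predictions(preds, top=5)[0])
```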


Google AI Open-Sources 'EfficientDet', an Advanced Object Detection Tool

#artificialintelligence

Google has always been at the forefront of artificial intelligence research. The Google AI team has recently open-sourced 'EfficientDet', an advanced object detection tool that needs only minimal compute. EfficientDet achieves better performance than YOLO or AmoebaNet when run on CPUs or GPUs. EfficientDet is the next-generation successor to EfficientNet, a family of advanced image classification models released in early 2019 and made available for Coral boards.


Google AI open-sources EfficientDet for state-of-the-art object detection

#artificialintelligence

Members of the Google Brain team and Google AI this week open-sourced EfficientDet, an AI tool that achieves state-of-the-art object detection while using less compute. Creators of the system say it also achieves faster performance when used with CPUs or GPUs than other popular object detection models like YOLO or AmoebaNet. When tasked with semantic segmentation, another task related to object detection, EfficientDet also achieves exceptional performance. Semantic segmentation experiments were conducted with the PASCAL Visual Object Classes (VOC) data set. EfficientDet is the next-generation version of EfficientNet, a family of advanced image classification models made available last year for Coral boards.


rwightman/gen-efficientnet-pytorch

#artificialintelligence

A 'generic' implementation of EfficientNet, MixNet, MobileNetV3, etc. that covers most of the compute/parameter-efficient architectures derived from the MobileNet V1/V2 block sequence, including those found via automated neural architecture search. I originally implemented and trained some of these models with code here; this repository contains just the GenEfficientNet models, validation, and associated ONNX/Caffe2 export code. I've managed to train several of the models to accuracies close to or above the originating papers and official implementations. More pretrained models to come... The weights ported from TensorFlow checkpoints for the EfficientNet models pretty much match the TensorFlow accuracy once a SAME convolution padding equivalent is added and the same crop factors, image scaling, etc. (see table) are used via command-line args.
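
The "SAME convolution padding equivalent" mentioned above refers to TensorFlow's padding rule, which can pad asymmetrically and which stock PyTorch convolutions do not reproduce exactly. The sketch below is a simplified illustration of that idea, not the repository's actual code.

```python
import math
import torch
import torch.nn.functional as F

def conv2d_tf_same(x, weight, bias=None, stride=1, dilation=1):
    """Conv2d with TensorFlow-style 'SAME' padding (possibly asymmetric).

    Simplified illustration only; gen-efficientnet-pytorch has its own
    implementation of this for the weights ported from TF checkpoints.
    """
    ih, iw = x.shape[-2:]
    kh, kw = weight.shape[-2:]
    eff_kh = (kh - 1) * dilation + 1          # effective kernel height
    eff_kw = (kw - 1) * dilation + 1          # effective kernel width
    pad_h = max((math.ceil(ih / stride) - 1) * stride + eff_kh - ih, 0)
    pad_w = max((math.ceil(iw / stride) - 1) * stride + eff_kw - iw, 0)
    # TF places the extra pixel (when total padding is odd) on the bottom/right.
    x = F.pad(x, [pad_w // 2, pad_w - pad_w // 2, pad_h // 2, pad_h - pad_h // 2])
    return F.conv2d(x, weight, bias, stride=stride, dilation=dilation)

# Example: a 3x3 stride-2 conv on a 224x224 input keeps ceil(224/2) = 112.
x = torch.randn(1, 3, 224, 224)
w = torch.randn(32, 3, 3, 3)
print(conv2d_tf_same(x, w, stride=2).shape)  # torch.Size([1, 32, 112, 112])
```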