How Efficient is EfficientNet?


If you've looked at the state-of-the-art benchmarks and leaderboards for ImageNet any time recently, you've probably seen a whole lot of this thing called "EfficientNet." Now, considering that we're talking about a dataset of 14 million images, which is probably a bit more than you took on your last family vacation, take the prefix "Efficient" with a fat pinch of salt. What makes the EfficientNet family special is that its members easily outperform other architectures with a similar computational cost. In this article, we'll discuss the core principles that govern the EfficientNet family. Primarily, we'll explore an idea called compound scaling: a technique that scales a network's depth, width, and input resolution together so the model can efficiently use whatever extra computational resources you have. Finally, I'll present the results I got from trying the various EfficientNet scales on a dataset much smaller than ImageNet, one that's far more representative of real-world problems.
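To make compound scaling concrete, here's a minimal sketch of how the three scaling factors are derived from a single compound coefficient φ. The α/β/γ values are the ones reported in the original EfficientNet paper; the baseline depth, width, and resolution numbers are purely illustrative assumptions, not the actual EfficientNet-B0 configuration.

```python
# Sketch of compound scaling: one coefficient phi jointly scales
# depth, width, and input resolution.
# alpha/beta/gamma are the grid-searched constants from the
# EfficientNet paper (Tan & Le, 2019); they roughly satisfy
# alpha * beta^2 * gamma^2 ~= 2, so each +1 in phi ~doubles FLOPs.

def compound_scale(phi, alpha=1.2, beta=1.1, gamma=1.15):
    """Return (depth, width, resolution) multipliers for coefficient phi."""
    return alpha ** phi, beta ** phi, gamma ** phi

# Hypothetical baseline network dimensions (illustrative only).
base_depth, base_width, base_res = 18, 32, 224

for phi in range(4):
    d, w, r = compound_scale(phi)
    print(f"phi={phi}: depth~{round(base_depth * d)}, "
          f"width~{round(base_width * w)}, resolution~{round(base_res * r)}")
```

The point of tying all three dimensions to one knob is that scaling only one of them (say, just making the network deeper) hits diminishing returns quickly, whereas balanced scaling keeps accuracy improving as compute grows.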
