Collaborating Authors

Measuring the Algorithmic Efficiency of Neural Networks Machine Learning

Three factors drive the advance of AI: algorithmic innovation, data, and the amount of compute available for training. Algorithmic progress has traditionally been more difficult to quantify than compute and data. In this work, we argue that algorithmic progress has an aspect that is both straightforward to measure and interesting: reductions over time in the compute needed to reach past capabilities. We show that the number of floating-point operations required to train a classifier to AlexNet-level performance on ImageNet decreased 44x between 2012 and 2019. This corresponds to algorithmic efficiency doubling every 16 months over a period of 7 years. By contrast, Moore's Law would only have yielded an 11x cost improvement. We observe that hardware and algorithmic efficiency gains multiply and can be on a similar scale over meaningful horizons, which suggests that a good model of AI progress should integrate measures from both.
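The abstract's figures can be sanity-checked with a few lines of arithmetic: a 44x gain over the 84 months from 2012 to 2019 works out to one doubling roughly every 16 months, while a Moore's Law baseline of one doubling every 24 months over the same span gives the quoted ~11x.

```python
import math

# A 44x efficiency gain over the 7 years (84 months) from 2012 to 2019.
months = 7 * 12
doublings = math.log2(44)               # ~5.46 doublings
months_per_doubling = months / doublings

# Moore's Law baseline: one doubling every 24 months over the same span.
moore_gain = 2 ** (months / 24)

print(round(months_per_doubling, 1))    # ~15.4, i.e. roughly every 16 months
print(round(moore_gain, 1))             # ~11.3, the quoted ~11x improvement
```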

What is More Important? Productivity or Efficiency?


Is productivity more important than efficiency? This is another in our "Great Articles You May Have Missed" series. Would you rather do the same with less, or do more with the same? That's the conundrum posed in a recent Harvard Business Review (HBR) article, "Great Companies Obsess Over Productivity, Not Efficiency." I'd venture to say most of us don't really know the difference between productivity and efficiency, so let's start by differentiating between the two.

Can we afford AI?


Of all the concerns surrounding artificial intelligence these days -- and no, I don't mean evil robot overlords, but more mundane things like job replacement and security -- perhaps none is more overlooked than cost. This is understandable, considering AI has the potential to lower the cost of doing business in so many ways. But AI is not only expensive to acquire and deploy; it also requires a substantial amount of compute power, storage, and energy to produce worthwhile returns. Back in 2019, AI pioneer Elliot Turner estimated that training the XLNet natural language system could cost upwards of $245,000, the price of roughly 512 TPUs running at full capacity for 60 straight hours. And there is no guarantee that such a run will produce usable results.
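A quick back-of-envelope check of that estimate: 512 devices for 60 hours is 30,720 TPU-hours, which puts the implied per-device rate near $8 per hour, roughly in line with on-demand cloud TPU pricing of that era. A minimal sketch:

```python
# Back-of-envelope check of the quoted XLNet figure: $245,000 for
# 512 TPUs running flat out for 60 hours implies a per-device hourly rate.
total_cost = 245_000                      # USD, Elliot Turner's estimate
devices = 512
hours = 60

device_hours = devices * hours            # 30,720 TPU-hours
rate = total_cost / device_hours
print(device_hours, round(rate, 2))       # 30720, ~$7.98 per TPU-hour
```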

The best microwaves for helping you out in the kitchen


Where would we be without this kitchen superhero? Whether you're out of ideas, short on time, or simply looking for the easiest option, the microwave is always there to make sure you're fed. If in doubt, opt for the microwave. In the UK, 89% of households own a microwave. With almost every kitchen rocking a microwave, we don't need to wax lyrical about these little cookers.

Statistically efficient thinning of a Markov chain sampler Machine Learning

It is common to subsample Markov chain output to reduce the storage burden. Geyer (1992) shows that discarding $k-1$ out of every $k$ observations will not improve statistical efficiency, as quantified through variance in a given computational budget. That observation is often taken to mean that thinning MCMC output cannot improve statistical efficiency. Here we suppose that it costs one unit of time to advance a Markov chain and then $\theta>0$ units of time to compute a sampled quantity of interest. For a thinned process, that cost $\theta$ is incurred less often, so it can be advanced through more stages. Here we provide examples to show that thinning will improve statistical efficiency if $\theta$ is large and the sample autocorrelations decay slowly enough. If the lag $\ell\ge1$ autocorrelations of a scalar measurement satisfy $\rho_\ell\ge\rho_{\ell+1}\ge0$, then there is always a $\theta<\infty$ at which thinning becomes more efficient for averages of that scalar. Many sample autocorrelation functions resemble first order AR(1) processes with $\rho_\ell =\rho^{|\ell|}$ for some $-1<\rho<1$. For an AR(1) process it is possible to compute the most efficient subsampling frequency $k$. The optimal $k$ grows rapidly as $\rho$ increases towards $1$. The resulting efficiency gain depends primarily on $\theta$, not $\rho$. Taking $k=1$ (no thinning) is optimal when $\rho\le0$. For $\rho>0$ it is optimal if and only if $\theta \le (1-\rho)^2/(2\rho)$. This efficiency gain never exceeds $1+\theta$. This paper also gives efficiency bounds for autocorrelations bounded between those of two AR(1) processes.
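The AR(1) case above is easy to explore numerically. In the abstract's cost model, one kept draw costs k + θ (k steps to advance the chain plus θ to evaluate the quantity), and thinning an AR(1) sequence with lag-ℓ autocorrelation ρ^ℓ by k yields an AR(1) sequence with parameter ρ^k, whose variance inflation factor for sample means is (1 + ρ^k)/(1 − ρ^k). A minimal sketch (function names are mine, not the paper's) that scores each integer k by this cost-weighted variance:

```python
import numpy as np

def cost_weighted_variance(k, rho, theta):
    """Cost per kept draw times the AR(1) variance inflation factor.

    Advancing the chain costs 1 per step and evaluating the sampled
    quantity costs theta, so each kept draw costs k + theta. The
    thinned sequence is AR(1) with parameter rho**k, whose variance
    inflation factor for sample means is (1 + rho**k) / (1 - rho**k).
    """
    rk = rho ** k
    return (k + theta) * (1.0 + rk) / (1.0 - rk)

def best_thinning(rho, theta, k_max=10_000):
    """Return the most efficient integer thinning factor k and its
    efficiency gain relative to no thinning (k = 1)."""
    ks = np.arange(1, k_max + 1)
    scores = cost_weighted_variance(ks, rho, theta)
    i = int(np.argmin(scores))
    return int(ks[i]), float(scores[0] / scores[i])
```

For example, with ρ = 0.5 the paper's threshold (1 − ρ)²/(2ρ) equals 0.25: at θ = 0.2 no thinning (k = 1) is optimal, while at θ = 0.3 thinning by k = 2 wins; and for any ρ ≤ 0, k = 1 is optimal. In all cases the gain stays below 1 + θ, as the abstract states.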