Distilling What We Know

Communications of the ACM 

The sheer size and complexity of today's generative pretrained transformer (GPT) models are nothing less than astounding. OpenAI's GPT-3, for example, contains approximately 175 billion parameters, and there is speculation that GPT-4 could have as many as 10 trillion parameters.a All of this places enormous demands on cloud resources, including compute cycles and energy consumption. At the moment, the computing power required to train state-of-the-art artificial intelligence (AI) models is rising at a rate of 15x every two years.b The cost of training a large GPT model can run into the millions of dollars.c