Why we need to do a better job of measuring AI's carbon footprint
I've just published a story about the first attempt to calculate the broader emissions of one of the most popular AI products right now, large language models, and how that accounting could help nudge the tech sector to clean up its act.

AI startup Hugging Face calculated the emissions of its large language model BLOOM, and its researchers found that the training process alone emitted 25 metric tons of carbon dioxide. Those emissions doubled once they took into account the wider hardware and infrastructure costs of running the model. They published their work in a paper posted on arXiv that has yet to be peer reviewed.

The finding in itself isn't hugely surprising; BLOOM is far "cleaner" than large language models like OpenAI's GPT-3 and Meta's OPT because it was trained on a French supercomputer powered by nuclear energy. The real significance of this work is that it points to a better way to calculate AI models' climate impact: going beyond training alone to account for how models are used in the real world.
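To make that accounting concrete, here is a minimal sketch of the lifecycle approach the paper points toward: summing training emissions with the other cost categories it highlights. The 25-tonne training figure comes from the story above; every other component value, and the category names themselves, are hypothetical placeholders chosen only so the total matches the reported doubling, not numbers from the paper.

```python
# Sketch of lifecycle emissions accounting for an AI model.
# Only the 25-tonne training figure is from the reporting above;
# the other components are illustrative placeholders.

TRAINING_TCO2 = 25.0           # reported for BLOOM's training run
EMBODIED_HARDWARE_TCO2 = 11.0  # hypothetical: manufacturing GPUs and servers
INFRASTRUCTURE_TCO2 = 9.0      # hypothetical: supporting compute infrastructure
DEPLOYMENT_TCO2 = 5.0          # hypothetical: serving the model after release


def lifecycle_emissions(components: dict[str, float]) -> float:
    """Sum per-component emissions (tonnes of CO2) into a lifecycle total."""
    return sum(components.values())


components = {
    "training": TRAINING_TCO2,
    "embodied hardware": EMBODIED_HARDWARE_TCO2,
    "infrastructure": INFRASTRUCTURE_TCO2,
    "deployment": DEPLOYMENT_TCO2,
}

total = lifecycle_emissions(components)
print(f"training only: {components['training']:.0f} t CO2")
print(f"lifecycle total: {total:.0f} t CO2 "
      f"({total / components['training']:.1f}x training alone)")
```

Keeping the components in a dictionary rather than a fixed formula reflects the broader point of the research: which categories get counted is exactly what's up for debate, and a fuller accounting simply means adding more entries.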
November 15, 2022