For all the advances enabled by artificial intelligence, from speech recognition to self-driving cars, AI systems consume a lot of power and can generate high volumes of climate-changing carbon emissions. A study last year found that training an off-the-shelf AI language-processing system produced 1,400 pounds of emissions, about the amount produced by flying one person round trip between New York and San Francisco. The full suite of experiments needed to build and train that AI language system from scratch can generate even more: up to 78,000 pounds, depending on the source of power. But there are ways to make machine learning cleaner and greener, a movement that has been called "Green AI." Some algorithms are less power-hungry than others, for example, and many training sessions can be moved to remote locations that get most of their power from renewable sources.
Artificial intelligence has a terrible carbon footprint. Researchers at Stanford University, Facebook AI Research, and Canada's McGill University have developed a tool to measure the hidden cost of machine learning. The "experiment impact tracker" quantifies how much electricity a machine learning project consumes and what that consumption costs in carbon emissions. The team first measured the energy cost of a specific artificial intelligence (AI) model -- a challenge, because a single machine often trains several models concurrently while also drawing power for shared overhead functions like data storage and cooling. The researchers then translated power consumption into carbon emissions by tapping public data about the local energy mix, whose blend of renewable and fossil fuels varies by location and time of day.
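The second step the researchers describe -- turning measured electricity use into an emissions figure using the local energy mix -- can be sketched in a few lines. The function name and the carbon-intensity numbers below are illustrative assumptions, not the actual API or data of the experiment impact tracker:

```python
# Minimal sketch of converting measured energy use into CO2 emissions.
# The carbon-intensity figures are illustrative placeholders; a real
# tracker would query regional grid data, which also varies by time of day.

# Approximate grid carbon intensity in kg CO2 per kWh (assumed values).
CARBON_INTENSITY_KG_PER_KWH = {
    "hydro_heavy": 0.03,   # e.g. a mostly hydroelectric grid
    "mixed_grid": 0.42,    # a typical fossil/renewable blend
    "coal_heavy": 0.82,
}

def estimate_emissions_kg(energy_kwh: float, region: str) -> float:
    """Translate electricity consumption into estimated CO2 emissions."""
    return energy_kwh * CARBON_INTENSITY_KG_PER_KWH[region]

# The same 300 kWh training run emits very different amounts of CO2
# depending on where it runs:
for region in CARBON_INTENSITY_KG_PER_KWH:
    print(region, round(estimate_emissions_kg(300.0, region), 1))
```

The point of the sketch is the one the researchers make: the multiplier, not the energy figure, is what changes most between locations, which is why the tool needs location- and time-aware energy-mix data.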
The COVID crisis has accelerated the adoption of artificial intelligence -- from tackling the global pandemic to managing various business processes. Despite its benefits, AI has long been scrutinised for ethical concerns such as embedded biases and privacy issues. But the technology also has a significant sustainability problem: it is known to consume a massive amount of energy, creating a negative impact on the environment. As AI advances at predicting weather, understanding human speech, enhancing banking payments, and revolutionising healthcare, its models must not only be trained on large datasets but also demand massive computing power to improve their accuracy. Such heavy computing and processing consumes a tremendous amount of energy and emits carbon dioxide, which has become an environmental concern. One report estimated that training a single large AI model can emit approximately 626,000 pounds (284 tonnes) of carbon dioxide -- roughly five times the lifetime emissions of the average US car.
Faced with dire reports of looming global catastrophe due to the ongoing climate emergency, many of us are taking a long, hard look at the carbon footprint of our daily lives -- whether from the food we eat, how much we drive or how often we fly. But sometimes the most intangible things pump out more carbon than we think. One example is the surprisingly large carbon footprint of creating machine learning models, the same technology that underlies the apps on our smartphones, digital personal assistants and computers. Using such tech might not emit all that much carbon; the cause for concern is the carbon impact of the computational processes that go into training AI, and whether researchers and companies are well-informed enough to choose less carbon-intensive options. Until now, AI researchers have not had an easily available method to quantify that carbon impact. That's changing, thanks to a team from Canada's Montreal Institute for Learning Algorithms (MILA), Element AI and Polytechnique Montreal, which recently released a tool to help those working in AI estimate how much carbon is produced in training their machine learning models.
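The kind of estimate such a tool produces can be approximated with a back-of-the-envelope formula: hardware power draw, times training hours, times a datacenter overhead factor, times the grid's carbon intensity. Everything below -- the function name, the default overhead (PUE) and the default intensity -- is an illustrative assumption, not the MILA tool's actual method or defaults:

```python
# Back-of-the-envelope training-emissions estimate (all figures assumed):
# emissions = power draw x hours x datacenter overhead (PUE)
#             x grid carbon intensity.

def training_emissions_kg(
    gpu_power_watts: float,
    hours: float,
    pue: float = 1.58,                  # assumed datacenter overhead factor
    grid_kg_co2_per_kwh: float = 0.42,  # assumed grid carbon intensity
) -> float:
    """Rough CO2 estimate for a training run on one accelerator."""
    energy_kwh = gpu_power_watts * hours * pue / 1000.0
    return energy_kwh * grid_kg_co2_per_kwh

# e.g. one 250 W GPU training for 100 hours on an average grid:
print(round(training_emissions_kg(250, 100), 1), "kg CO2")
```

Even this crude version makes the article's point concrete: two of the four factors (overhead and grid intensity) depend entirely on where the training runs, which is exactly the choice the tool is meant to inform.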
There's been a reckoning in recent years when it comes to measuring bias in machine learning. We now know that these "unbiased" automated tools are actually far from unprejudiced, and there's a growing demand that researchers think about how their products might screw over or endanger the lives of others before they unleash them on society. It's not just the final products we should be worried about, however, but also the consequences of building them. As the world burns in Facebook feeds and in backyards, the carbon footprints of even the most innocuous things are coming under scrutiny. It's sparked debates around AC units, straws, face scrubs, plastic bags, air travel.