Every time we use ChatGPT to summarise a report, Google Gemini to write an email or an AI image generator to create a picture, it comes at a cost to the planet.
Generating a single image takes as much energy as fully charging your smartphone, researchers from AI start-up Hugging Face and Carnegie Mellon University told MIT Technology Review.
AI has made rapid advances over the past 18 months and has been incorporated into many aspects of our lives, from health and finance to education and transport.
However, the drive to auto-generate as much as possible comes at a steep environmental cost in energy and carbon emissions. And it is not only AI's day-to-day usage that does this: training massive AI models is also incredibly energy intensive.
“At the moment a large factor driving energy consumption in training large AI models is the race for increasingly larger models trained on increasingly larger datasets,” said Dr Jo Plested, a researcher at University of NSW.
“These models require an enormous amount of compute to train which has environmental implications both in terms of the power consumption for training models, and the ever-increasing demand for building more high-performance compute clusters.”
So what is the carbon footprint of AI today?
The data
Training a single AI model such as GPT-3 can emit 502 tonnes of carbon.
“Energy consumption with large language models can of course contribute to carbon emissions, particularly if that energy used is coming from non-renewable sources such as fossil fuels,” said Lauren Purcell from Carbon Positive Australia.
“We aren’t experts on this matter but knowing that we all have a carbon footprint and AI technology is not immune to this.”
Training large language models (LLMs)
In a new paper, researchers at the University of Massachusetts Amherst found that training LLMs such as Google Gemini and GPT-4 requires exposing the model to extensive datasets of hundreds of billions of words so it can learn patterns and relationships.
The UMass Amherst research found that as these models advance, their size expands exponentially, leading to substantial computational demands for both training and inference processes.
“[The] competition in the field is mostly focused on increasing the size and therefore capability of models,” said Dr Plested. “The forefront of AI development is seen as working on improving the capabilities of the largest models, therefore researchers have to use these models to have their work taken seriously, even if they are working on areas that don’t involve increasing model size.
“It is difficult or even impossible to get published in the field without using huge compute resources.”
The UMass Amherst research revealed that the computational and environmental burdens of training escalate as additional tuning steps are added to increase the model's final accuracy.
“The number of experiments needed for methods and results to be taken seriously at the top level keeps growing,” said Dr Plested.
“It is not possible to publish at the top level with a proof of concept of a new method and a limited number of experiments. A large amount [of] experiments to showcase the performance in a large range of different scenarios are needed.”
A more recent investigation found that training GPT-3, which boasts 175 billion parameters, consumed 1,287 MWh of electricity and produced 502 tonnes of carbon emissions, equivalent to the annual emissions of 112 gasoline-powered cars.
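The per-car comparison can be checked directly from the totals reported above. A quick sketch in Python (the 502-tonne and 112-car figures come from the study cited; the resulting per-car figure of roughly 4.5 tonnes a year is consistent with commonly cited annual emissions for an average gasoline-powered car):

```python
# Figures reported for training GPT-3 (from the study cited above)
training_energy_mwh = 1287     # electricity consumed, in megawatt-hours
training_emissions_t = 502     # carbon emitted, in tonnes of CO2

# Annual per-car emissions implied by the "112 cars" comparison
cars = 112
tonnes_per_car_per_year = training_emissions_t / cars
print(f"{tonnes_per_car_per_year:.2f} tonnes CO2 per car per year")  # ≈ 4.48
```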
Not only training: AI usage also costs the planet
It took over 590 million uses of the AI model BLOOM to match the carbon cost of its training, machine learning collaboration platform Hugging Face told MIT Technology Review.
For very popular models such as ChatGPT, usage emissions could exceed training emissions within just a couple of weeks: large AI models are trained once but can be used billions of times, said Sasha Luccioni, an AI researcher at Hugging Face.
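The break-even point Luccioni describes is a simple ratio: once cumulative per-query emissions exceed the one-off training emissions, usage dominates. A minimal sketch; the 590 million uses figure for BLOOM comes from the article, but the 25-tonne training footprint below is an illustrative placeholder, not a figure from the source:

```python
def per_use_emissions(training_emissions_kg: float, breakeven_uses: float) -> float:
    """Average per-query emissions implied by a break-even point:
    after `breakeven_uses` queries, usage emissions match training."""
    return training_emissions_kg / breakeven_uses

# BLOOM reportedly needed over 590 million uses to match its training cost.
# Assuming a hypothetical 25-tonne (25,000 kg) training footprint
# (placeholder value), that implies roughly 0.04 g of CO2 per query.
print(per_use_emissions(25_000, 590e6) * 1000, "g CO2 per query")
```

The smaller the per-query footprint, the more uses it takes to reach break-even, which is why heavily used models can overtake their training cost within weeks.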
ChatGPT has millions of users a day, many of whom prompt the model more than once.
AI usage: Specific task models
Image generation is the most energy- and carbon-intensive AI task, while classification tasks for both images and text sit at the lower end of the emissions spectrum.
The high standard deviation for image generation suggests large variation between models, depending on the size of the images they generate.
According to Everypixel Journal, as of August 2023, each day sees approximately 34 million new AI-generated images.
Generating 3,400 images with a powerful AI model, such as Stable Diffusion XL, is responsible for roughly as much carbon dioxide as driving the equivalent of 22.44 kilometres in an average gasoline-powered car.
To put this in context, the equatorial circumference of the Earth is 40,075 kilometres.
The 34 million AI-generated images created each day are therefore equivalent to an average gasoline-powered car driving 5.6 laps around the Earth.
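The laps figure follows directly from scaling up the 3,400-image sample. A quick check in Python, using only the numbers given above:

```python
# Figures from the study: 3,400 Stable Diffusion XL images emit roughly
# as much CO2 as driving 22.44 km in an average gasoline-powered car.
km_per_3400_images = 22.44
images_per_day = 34_000_000        # AI images generated daily (Everypixel)
earth_circumference_km = 40_075    # equatorial circumference

daily_km = images_per_day / 3_400 * km_per_3400_images
laps = daily_km / earth_circumference_km
print(f"{daily_km:,.0f} km per day ≈ {laps:.1f} laps of the Earth")
```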
In contrast, the least carbon-intensive text generation model they examined was responsible for as much CO2 as driving 1 metre in a similar vehicle.
Similarly, generating text 1,000 times uses only as much energy as 16 per cent of a full smartphone charge.
AI usage: Large multimodal models
Using large generative models to create outputs is far more energy-intensive than using smaller AI models tailored for specific tasks.
Hugging Face revealed that classifying movie reviews as ‘positive’ or ‘negative’ with a multi-purpose model consumed 30 times more energy than using a task-specific model.
Generative AI consumes more energy because it is doing many things at once, such as generating, classifying and summarising, instead of focusing on one task.
“If you’re doing a specific application, like searching through email… do you really need these big models that are capable of anything? I would say no,” said Luccioni.
What does it mean for us today?
These generative AI models are used millions, if not billions, of times every single day, and AI will continue to advance and integrate further into our daily lives.
Melissa Heikkilä from MIT Technology Review says carbon emissions add up quickly as businesses integrate powerful AI models into many different products, from email to word processing.
“Owners of compute resources also need to be looking at their environmental impact and renewable energy. Increasingly large organisations with large AI compute resources are starting to make them carbon neutral with renewable energy facilities,” said Dr Plested.
“It is important for AI research communities to have these conversations and prioritise environmental sustainability.”
The environmental footprint of AI models is the cumulative impact of small choices made by individual users, from generating text to creating images.
The collective engagement of millions of users with AI on a daily basis contributes to its overall environmental impact.
“Studies like these make the energy consumption and emissions related to AI more tangible and help raise awareness that there is a carbon footprint associated with using AI,” said Vijay Gadepally, a research scientist at MIT Lincoln Laboratory.
“I would love it if this became something that consumers started to ask about.”
Main image generated with Leonardo.ai.