
As AI becomes more pervasive, its environmental impact is on the rise

Estimates suggest that, without significant interventions, AI models could consume more energy than the entire human workforce by 2025, significantly setting back global carbon-reduction goals

August 08, 2023 / 12:43 IST
ChatGPT (Representative image)

Training GPT-3, the predecessor of ChatGPT, is estimated to have produced 502 tonnes of carbon-equivalent emissions


As generative artificial intelligence (AI) models gain acceptance and popularity, one aspect of the boom has been largely overlooked: its environmental cost.

Several studies have been done, and much has been written, about using AI to combat climate change. In fact, in a Boston Consulting Group survey of 1,055 global leaders with decision-making power in AI and climate last year, 87 percent said they believe AI is an essential tool in the fight against climate change.

However, a growing number of studies are now focusing on the environmental impact of AI models themselves.

Generative AI models run on graphics processing units (GPUs), which can process large volumes of data in parallel. Using GPUs to train GPT-3, the predecessor of ChatGPT, resulted in 502 tonnes of carbon-equivalent emissions, according to a report by Stanford University. That is roughly 91 times the average person's annual emissions, and about 500 times the emissions of a passenger's round-trip flight (roughly 8,000 km in total) between New York City and San Francisco.
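
As a rough sanity check, those baselines can be back-calculated from the 502-tonne figure. A minimal sketch in Python, where the per-person and per-flight values are implied by the article's ratios rather than taken from the Stanford report:

    # Back-of-the-envelope check on the GPT-3 emissions comparison.
    # The 502-tonne figure is from the article; the baselines below are
    # implied by its "91x" and "500x" ratios, not quoted from Stanford.
    gpt3_emissions_t = 502  # tonnes of CO2-equivalent

    person_annual_t = gpt3_emissions_t / 91       # ~5.5 t: per-person annual emissions
    flight_round_trip_t = gpt3_emissions_t / 500  # ~1.0 t: one NYC-SF round trip

    print(f"Implied per-person annual footprint: {person_annual_t:.1f} t CO2e")
    print(f"Implied NYC-SF round-trip footprint: {flight_round_trip_t:.1f} t CO2e")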

Water cost

Since AI systems such as large language models (LLMs) need significant processing power, the data centres where they are trained and hosted generate a huge amount of heat, necessitating water-intensive cooling systems.

According to a study published this year, training GPT-3 would have consumed as much as 3.5 million litres of water, assuming it was trained in US data centres. Had the model been trained in Microsoft's Asian data centres, the water usage would have been 4.9 million litres, enough to produce 600 BMW cars or 2,200 Tesla electric vehicles.


“ChatGPT needs to ‘drink’ a 500ml bottle of water for a simple conversation of roughly 20-50 questions and answers, depending on when and where ChatGPT is deployed. While a 500ml bottle of water might not seem too much, the total combined water footprint for inference is still extremely large, considering ChatGPT’s billions of users,” said the study.
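
Scaling the study's per-conversation figure gives a sense of that aggregate footprint. A minimal sketch, where the daily conversation count is a hypothetical assumption for illustration, not a figure from the study:

    # Scale the study's 500 ml-per-conversation estimate to aggregate usage.
    # The conversation volume below is an assumed value for illustration;
    # the study does not publish a daily total.
    water_per_conversation_l = 0.5      # litres per 20-50 Q&A exchange (study)
    conversations_per_day = 10_000_000  # hypothetical daily conversation count

    daily_water_l = water_per_conversation_l * conversations_per_day
    print(f"Estimated inference water use: {daily_water_l / 1e6:.1f} million litres/day")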

GPT-3 has 175 billion parameters. GPT-4, which powers ChatGPT Plus, is believed to be significantly larger, with some media reports putting the figure at 1.76 trillion parameters. As new iterations grow in size, AI models could require more energy and more water-intensive cooling, increasing their environmental cost.
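
The link between parameter count and energy use can be made concrete with a common scaling heuristic: training compute is roughly 6 × parameters × training tokens, measured in floating-point operations. A minimal sketch, where GPT-3's 300-billion-token training run is public but the GPT-4 values are unconfirmed media estimates used purely for illustration:

    # Rough training-compute comparison using the common heuristic
    # FLOPs ~= 6 * parameters * training tokens.
    # GPT-3's token count is from its published paper; the GPT-4 values
    # are unconfirmed media estimates, used only to show the scaling.
    def training_flops(params: float, tokens: float) -> float:
        return 6 * params * tokens

    gpt3 = training_flops(175e9, 300e9)    # ~3.2e23 FLOPs
    gpt4 = training_flops(1.76e12, 13e12)  # assumed 13T tokens (rumoured)

    print(f"GPT-3: {gpt3:.2e} FLOPs")
    print(f"GPT-4 (assumed): {gpt4:.2e} FLOPs, ~{gpt4 / gpt3:.0f}x GPT-3")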

Making AI models more energy efficient

AI models themselves are now being used to make AI systems more energy efficient. DeepMind, a subsidiary of Google, has been experimenting with BCOOLER, an AI agent that optimises cooling energy use in data centres.

A live experiment with the model at two real-world facilities resulted in energy savings of 9 to 13 percent.
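
The article does not describe how BCOOLER works, but data-centre cooling optimisers are typically framed as a feedback loop: observe the facility's state, pick a cooling setpoint, and score the choice by energy used, subject to temperature-safety constraints. Below is a minimal sketch of that loop using a greedy constrained search over a toy simulator; it illustrates the shape of the problem, not DeepMind's actual system:

    import random

    # Toy stand-in for a data-centre cooling problem: the controller picks a
    # chilled-water setpoint; energy falls as the setpoint rises, but the
    # server temperature must stay under a safety limit. All numbers here
    # are invented for illustration.
    TEMP_LIMIT_C = 30.0

    def simulate(setpoint_c: float) -> tuple[float, float]:
        """Return (energy_kw, server_temp_c) for a given setpoint."""
        energy_kw = 100.0 - 2.5 * setpoint_c  # higher setpoint, less chilling
        server_temp_c = 12.0 + setpoint_c + random.uniform(-0.5, 0.5)
        return energy_kw, server_temp_c

    def choose_setpoint(candidates):
        """Greedy constrained choice: least energy among safe setpoints."""
        best, best_energy = None, float("inf")
        for sp in candidates:
            energy, temp = simulate(sp)
            # candidates that breach the temperature limit are rejected
            if temp <= TEMP_LIMIT_C and energy < best_energy:
                best, best_energy = sp, energy
        return best, best_energy

    setpoint, energy = choose_setpoint([12, 14, 16, 18])
    print(f"Chosen setpoint: {setpoint} C at ~{energy:.0f} kW")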

However, a study published in July found that most of the methods currently proposed to reduce AI models' environmental cost end up worsening their overall performance.

If things do not improve soon, AI models could consume more energy than the entire human workforce by 2025, according to estimates by Gartner, a US-based technology research and consulting firm. This could significantly set back global carbon-reduction goals.

Some developers are now also looking at building smaller AI models rather than increasing model size with each iteration. Meta's LLaMA, a 65-billion-parameter LLM, is considered to rival the much larger models that power ChatGPT.

“Training smaller foundation models like LLaMA is desirable in the large language model space because it requires far less computing power and resources,” the social media giant said in its statement announcing the release of the model.

Sreedev Krishnakumar
first published: Aug 8, 2023 12:43 pm
