
Why Some AI Models Spew 50 Times More Greenhouse Gas to Answer the Same Question


Like it or not, large language models have quickly become part of our lives. And given their intense energy and water needs, they may also be pushing us faster toward climate chaos. Some LLMs, however, could produce far more planet-warming pollution than others, according to a new study.

Queries to some models generate up to 50 times more carbon emissions than queries to others, according to a new study published in Frontiers in Communication. Unfortunately, and perhaps unsurprisingly, the more accurate models tend to carry the biggest energy costs.

It is difficult to estimate just how bad LLMs are for the environment, but some studies have suggested that training ChatGPT used up to 30 times more energy than the average American consumes in a year. What has been less clear is whether some models have steeper energy costs than their peers when they answer questions.

To find out, researchers at the Hochschule München University of Applied Sciences in Germany evaluated 14 LLMs ranging from 7 to 72 billion parameters – the levers and dials that fine-tune a model's understanding and generation of language – on 1,000 benchmark questions spanning a variety of subjects.

LLMs convert each word, or part of a word, in a prompt into a string of numbers called tokens. Some LLMs, particularly reasoning models, also insert special "thinking tokens" into the sequence to allow extra internal computation and reasoning before generating an output. This conversion, and the subsequent computations the LLM performs on the tokens, use energy and release CO2.
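As a rough illustration of that mechanism (a sketch only: the whitespace tokenizer and the thinking-token budget below are invented placeholders, not the tokenizers or numbers used in the study):

```python
# Sketch: how a prompt becomes tokens, and how a "reasoning" model pads
# the sequence with extra thinking tokens before it produces an answer.
# Every additional token means more computation, more energy, more CO2.

def naive_tokenize(prompt: str) -> list[str]:
    # Real LLM tokenizers split text into subword units; whitespace
    # splitting is a stand-in to keep this example self-contained.
    return prompt.split()

def total_tokens(prompt: str, reasoning: bool, thinking_budget: int = 500) -> int:
    prompt_tokens = len(naive_tokenize(prompt))
    # Reasoning models emit internal "thinking" tokens before answering.
    return prompt_tokens + (thinking_budget if reasoning else 0)

print(total_tokens("What is the capital of France?", reasoning=False))  # 6
print(total_tokens("What is the capital of France?", reasoning=True))   # 506
```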

The researchers compared the number of tokens generated by each model they tested. Reasoning models created an average of 543.5 thinking tokens per question, while concise models required just 37.7 tokens per question, according to the study. In ChatGPT's world, for example, GPT-3.5 is a concise model, while GPT-4o is a reasoning model.
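A quick back-of-the-envelope comparison of those two averages shows the scale of the gap (only the token counts come from the study; the script itself is just illustrative arithmetic):

```python
# Average tokens generated per question, as reported in the study.
reasoning_tokens_per_q = 543.5
concise_tokens_per_q = 37.7

# Roughly how much more output a reasoning model generates per question.
ratio = reasoning_tokens_per_q / concise_tokens_per_q
print(f"Reasoning models generate ~{ratio:.1f}x more tokens per question")  # ~14.4x
```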

This reasoning process drives up energy needs, the authors say. "The environmental impact of questioning trained LLMs is strongly determined by their reasoning approach," study author Maximilian Dauner, a researcher at Hochschule München University of Applied Sciences, said in a press release. "We found that reasoning-enabled models produced up to 50 times more CO2 emissions than concise response models."

The more accurate the models, the more carbon emissions they produced, the study found. The reasoning model Cogito, which has 70 billion parameters, reached 84.9% accuracy, but it also produced three times more CO2 emissions than similarly sized models that generate more concise responses.

"Currently, we see a clear accuracy-sustainability trade-off inherent in LLM technologies," Dauner said. "None of the models that kept emissions below 500 grams of CO2 equivalent achieved higher than 80% accuracy on answering the 1,000 questions correctly." CO2 equivalent is the unit used to measure the climate impact of different greenhouse gases.
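For readers unfamiliar with the unit, CO2 equivalent simply weights each greenhouse gas by its global-warming potential so different gases can be summed on one scale (a sketch; the 100-year GWP values below are approximate IPCC figures, not numbers from the study):

```python
# Approximate 100-year global-warming potentials relative to CO2.
GWP_100 = {"CO2": 1, "CH4": 28, "N2O": 265}

def co2_equivalent(emissions_g: dict[str, float]) -> float:
    # Weight each gas by its warming potential and sum into grams of CO2e.
    return sum(grams * GWP_100[gas] for gas, grams in emissions_g.items())

print(co2_equivalent({"CO2": 450.0, "CH4": 1.0}))  # 478.0 g CO2e
```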

Subject matter was another factor. Questions that required lengthy or complex reasoning, such as abstract algebra or philosophy, led to emissions up to six times higher than more straightforward subjects, according to the study.

There are some caveats, however. Emissions depend heavily on how local energy grids are structured and which models you examine, so it is unclear how generalizable these results are. Still, the study's authors said they hope the work will encourage people to be "selective and thoughtful" about their LLM use.

"Users can significantly reduce emissions by prompting AI to generate concise answers or limiting the use of high-capacity models to tasks that genuinely require that power," Dauner said in the press release.
