The environmental impact of AI’s core infrastructure—particularly data centres—has become a growing concern worldwide. Recent research highlights the significant carbon footprint of large language models (LLMs) in natural language processing (NLP) and emphasises the need for ecologically sustainable practices in AI development. The challenge is compounded by revelations, reported in a recent article in The Guardian, that tech giants have been underreporting emissions from the data centres that play a crucial role in AI operations.
A study titled Green AI: Exploring Carbon Footprints, Mitigation Strategies, and Trade-offs in Large Language Model Training sheds light on the energy-intensive nature of AI model training. Training LLMs such as BERT and T5, which are commonly used in NLP tasks, requires vast computational resources, leading to substantial greenhouse gas emissions. The paper presents findings on how model size, hardware, and training approaches contribute to these emissions, and highlights that, as NLP advances, the environmental costs of AI are becoming more pressing. It also suggests several strategies to reduce the carbon footprint of AI models without compromising performance.
The research explores the trade-off between performance and AI sustainability. While more powerful models like BERT and T5 often require substantial computational power, optimised versions like DistilBERT demonstrate that similar performance can be achieved with fewer resources, and therefore lower emissions. However, this push for “Green AI” faces new challenges as AI’s energy demands surge, driven by applications like ChatGPT. The paper also touches on the financial side of AI sustainability, noting that more efficient GPUs like the A100 come at a higher cost, potentially putting them out of reach for smaller organisations or individuals; the long-term benefits of reduced energy consumption and quicker training cycles can, however, offset the initial investment. Moreover, model training is only one of many environmental concerns arising from AI infrastructure, alongside data centre operation, raw-material mining, and e-waste.
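The trade-off described above can be made concrete with a back-of-the-envelope estimate using the widely used formula: energy (kWh) = GPU power × hours × GPU count × PUE, and emissions = energy × grid carbon intensity. This is a minimal sketch; all the numeric inputs below (GPU draw, run length, PUE, grid intensity, and the assumed 40% compute ratio for a distilled model) are illustrative assumptions, not figures from the paper.

```python
def training_emissions_kg(gpu_power_kw: float, hours: float, num_gpus: int,
                          pue: float, grid_kg_per_kwh: float) -> float:
    """Rough training-run emissions estimate in kg CO2e.

    energy (kWh) = per-GPU draw x runtime x GPU count x data-centre PUE,
    emissions    = energy x carbon intensity of the supplying grid.
    """
    energy_kwh = gpu_power_kw * hours * num_gpus * pue
    return energy_kwh * grid_kg_per_kwh

# Illustrative comparison: a full BERT-scale run vs a distilled model
# assumed (for illustration only) to need ~40% of the GPU-hours.
full = training_emissions_kg(gpu_power_kw=0.4, hours=1000, num_gpus=8,
                             pue=1.5, grid_kg_per_kwh=0.4)
distilled = training_emissions_kg(gpu_power_kw=0.4, hours=400, num_gpus=8,
                                  pue=1.5, grid_kg_per_kwh=0.4)
print(f"full: {full:.0f} kg CO2e, distilled: {distilled:.0f} kg CO2e")
```

Under these assumptions the distilled run emits 60% less, which is the shape of the saving the paper attributes to models like DistilBERT; real savings depend on the actual hardware, runtime, and grid mix.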
Closely related to the environmental costs of AI is big tech companies’ underreporting of their climate footprint. The Guardian recently revealed that emissions from in-house data centres owned by tech giants—such as Google, Microsoft, Meta, and Apple—may be 662% higher than officially reported. This underreporting is attributed to the use of renewable energy certificates (RECs), a market-based accounting tool that allows companies to claim renewable energy usage even if the energy consumed isn’t geographically matched to the facilities in question. When location-based emissions are considered, the carbon footprint of these data centres is significantly higher, with emissions sometimes increasing twentyfold.
Data centres already account for 1% to 1.5% of global electricity consumption, a figure expected to grow with the rise of AI. A single ChatGPT query, for instance, requires ten times the electricity of a Google search. By 2030, data centre emissions are projected to reach 2.5 billion metric tons of CO2 equivalent. While companies like Google and Microsoft aim for full renewable energy usage by 2030, the gap between official emissions figures and location-based calculations reveals a concerning trend. For example, Meta’s officially reported scope 2 emissions—those associated with purchased electricity—jump from 273 metric tons of CO2 equivalent to 3.8 million metric tons under location-based accounting. The industry’s reliance on creative accounting practices, such as RECs, allows firms to downplay the environmental costs of their AI-driven operations, even as energy demands grow exponentially.
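The mechanics behind these diverging figures can be sketched in a few lines: under market-based accounting, purchased RECs zero out the covered share of consumption, while under location-based accounting emissions follow the grid actually supplying the facility. The function names and all numbers below are simplified illustrations, not any company’s actual data.

```python
def scope2_market_based(energy_kwh: float, grid_kg_per_kwh: float,
                        rec_kwh: float) -> float:
    """Market-based scope 2: RECs offset the covered share of consumption."""
    uncovered_kwh = max(energy_kwh - rec_kwh, 0.0)
    return uncovered_kwh * grid_kg_per_kwh

def scope2_location_based(energy_kwh: float, grid_kg_per_kwh: float) -> float:
    """Location-based scope 2: emissions follow the local grid mix."""
    return energy_kwh * grid_kg_per_kwh

energy = 10_000_000   # kWh consumed by the facility (assumed)
grid = 0.38           # kg CO2e per kWh of the local grid (assumed)
recs = 9_900_000      # kWh covered by purchased certificates (assumed)

market = scope2_market_based(energy, grid, recs)
location = scope2_location_based(energy, grid)
print(f"market-based: {market:,.0f} kg; location-based: {location:,.0f} kg")
```

With near-total REC coverage, the reported market-based figure shrinks to a small fraction of the location-based one, which is how gaps like Meta’s 273 tonnes versus 3.8 million tonnes can arise on paper.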
While the development of sustainable AI models is crucial for reducing emissions, the broader issue of data centre energy consumption must also be addressed. Tech companies need to transition toward more transparent emissions reporting and adopt greener energy solutions, or the environmental costs of AI may far outweigh its benefits. At EthicAI, we support organisations and businesses in adopting more sustainable AI practices through both ethical and legal compliance. Our services offer comprehensive strategies for reducing carbon footprints, enhancing transparency, and ensuring compliance with emerging environmental standards. By providing tools to assess the environmental impact of AI models, guiding data centre efficiency improvements, and advising on responsible energy sourcing, we help companies align their AI operations with sustainability goals. This not only mitigates environmental risks but also ensures that businesses stay ahead of regulatory requirements and public expectations for corporate responsibility.
References:
Liu, V., & Yin, Y. (2024). Green AI: Exploring carbon footprints, mitigation strategies, and trade-offs in large language model training. Discover Artificial Intelligence, 4, 49.
O’Brien, I. (2024). Data center emissions probably 662% higher than big tech claims. Can it keep up the ruse? The Guardian.