
Is AI sustainability a top-down or bottom-up problem?

May 16, 2025 | Sustainability

Artificial intelligence (AI) is being framed in some quarters as an innovation comparable only to the invention of electricity, inevitably poised to transform everything from healthcare to finance, from supply chains to creativity. But as the global business community and governments race to deploy AI models at ever-larger scale, how will AI fit within the planetary boundaries of sustainability?

Two recent contributions – one from AI & Climate Lead Dr Sasha Luccioni of Hugging Face and another from the Royal Academy of Engineering – shed light on the question from different but complementary angles. Taken together, they might be a blueprint for where businesses, governments and technology leaders should steer next.

Luccioni’s message is a rallying cry to the developer community: we can make AI smaller, leaner, and more efficient without sacrificing capability. Hugging Face’s work on “SmolLMs” – small language models designed to run locally on devices as lightweight as smartphones – challenges the narrative that bigger models are always better. Through curation of training data, knowledge distillation techniques, and quantisation (which reduces computational precision to make operations faster), the open-source community has shown that high performance can be achieved with far less energy and fewer resources. Notably, community preferences lean toward these smaller models: downloads of compressed or distilled versions often outpace those of their larger, more resource-intensive originals.
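To make quantisation concrete, the toy sketch below (plain NumPy, not any particular Hugging Face tooling, with a purely illustrative weight matrix) maps 32-bit floating-point weights onto 8-bit integers with a single scale factor, trading a little precision for a fourfold reduction in memory:

```python
import numpy as np

# Toy sketch of 8-bit quantisation: map float32 weights onto int8
# with a single scale factor, then reconstruct approximate values.
weights = np.random.randn(4, 4).astype(np.float32)     # stand-in for a model weight matrix

scale = np.abs(weights).max() / 127.0                   # largest magnitude maps to +/-127
quantised = np.round(weights / scale).astype(np.int8)   # 1 byte per weight instead of 4

dequantised = quantised.astype(np.float32) * scale      # approximate reconstruction
print("max reconstruction error:", np.abs(weights - dequantised).max())
print(f"memory: {weights.nbytes} bytes -> {quantised.nbytes} bytes")
```

Production schemes add refinements such as per-channel scales, zero points and calibration data, but the principle is the same: fewer bits per weight means less memory, less data movement and less energy per inference.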

The environmental logic behind Luccioni’s argument is striking. Training a giant model generates tonnes of CO₂ emissions, not to mention enormous demands on water. Reusing, fine-tuning, or adapting existing models rather than starting afresh can sharply reduce this footprint. Crucially, using smaller and carefully optimised language models opens the door to running on CPUs – processors that are generally much cheaper and more resource-efficient than the specialised GPUs commonly used for large AI models. This makes cutting-edge tools accessible to those without racks of GPUs or deep-pocketed cloud contracts, thereby lowering barriers to innovation.
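As a rough illustration of how low the barrier can be, the minimal sketch below (assuming the Hugging Face transformers library is installed and using an illustrative small-model identifier, which can be swapped for any compact causal language model) runs entirely on a CPU, with no GPU required:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HuggingFaceTB/SmolLM2-135M-Instruct"        # illustrative small-model id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # loads onto the CPU by default

prompt = "List three ways to cut the energy footprint of a data centre:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same pattern extends to adaptation: fine-tuning an existing checkpoint on a modest, task-specific dataset is far cheaper, in both energy and money, than training a new model from scratch.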

If the Hugging Face community is a font of bottom-up, technically led solutions, the Royal Academy of Engineering (RAE) report ‘Engineering Responsible AI: foundations for environmentally sustainable AI’ takes a top-down, system-wide lens. It asks governments and businesses to think beyond model optimisation and address the broader environmental context. The message is less about code and more about infrastructure, supply chains, and governance.

The RAE warns that the rapid expansion of AI is driving up energy and water consumption at rates that could outstrip renewable generation and local water supplies, particularly as data centres cluster around metropolitan hubs. Semiconductors, the lifeblood of AI compute, require vast quantities of ultrapure water to manufacture and depend on the mining of critical materials such as gallium and germanium. Without intervention, AI’s environmental footprint will not only strain national decarbonisation efforts but also exacerbate social inequities, as environmental burdens often fall unevenly on vulnerable communities.

To confront this, the RAE outlines five foundational pillars:

  • expanding environmental reporting mandates,
  • addressing information asymmetries across the AI value chain,
  • setting sustainability requirements for data centres,
  • rethinking data management practices, and
  • embedding sustainability into public investment.

Much like Hugging Face’s push for small, efficient models, the RAE recognises the potential of smaller, task-specific AI models, alternative chip architectures, and assurance technologies that monitor and mitigate environmental risks. But it also calls for structural reforms, from better data on energy and water use to new regulatory standards that push the entire AI ecosystem – not just enthusiastic open-source developers – towards sustainability.

For businesses, AI sustainability presents both a challenge and an opportunity. On the one hand, firms face increasing scrutiny over their environmental, social and governance (ESG) performance, and the opaque, energy-intensive nature of many AI systems sits awkwardly with corporate net-zero commitments. On the other hand, the growing appetite for efficient, adaptable AI opens up new commercial opportunities. Smaller models reduce technical debt and overhead costs, making them attractive not just for sustainability but also for their operational agility. And aligning early with forthcoming environmental standards – on data centre cooling, renewable energy sourcing, or hardware reuse – enables companies to capture first-mover advantages as sustainability reporting and regulations inevitably tighten.

The intersection of the two perspectives – bottom-up technological innovation and top-down systemic reform – points to a next step: integration. Technical ingenuity cannot resolve the environmental challenges AI presents if it operates in a vacuum, without coordinated policy, reliable environmental data, and aligned commercial and regulatory incentives. Equally, regulation and reporting requirements will falter without the creative solutions, experimentation, and openness that open-source communities are uniquely placed to provide. There is a clear strategic and ethical imperative in encouraging the confluence of bottom-up and top-down approaches to sustainable AI.

Where next for AI sustainability? The answer is surely in knitting these two approaches together coherently. Businesses should embrace technically-driven suggestions such as using smaller models not as neat tricks, but as core components of their AI strategies. Policymakers, in turn, need to craft governance architectures that reward frugality and efficiency, while enabling the innovation ecosystem to flourish. Above all, both stakeholder groups must recognise that sustainability is not a bolt-on or a marketing slogan but a fundamental AI design challenge – one that will shape the long-term viability of the artificial intelligence revolution.