We’ve been
thinking…a lot
The Top Ten AI articles in 2025
Every year a handful of the articles we write rise to the top - the ones our readers share most widely, return to repeatedly or cite when trying to make sense of AI’s accelerating impact. We’ve rounded up the ten most-read articles published on our site over the past...
‘Adversarial poetry’ exposes a weakness in AI
Mention 'jailbreaking' of AI models and the expression usually conjures up the image of a determined attacker working through elaborate prompt engineering tricks. They might coax the model into role-play or nudge it step by step into revealing information it...
The uncomfortable truth about AGI hype
In 2024, ChatGPT alone allegedly produced around 1/1000 of all words produced by humanity each day (Altman, 2024). Now, around 40% of text on active web pages originates from AI-generated sources (Spenneman, 2025). Certain voices still claim that Artificial General...
Trust and artificial intelligence
A new survey paints a picture of a world increasingly unsure about the current path of artificial intelligence (AI). The Edelman Trust Barometer Flash Poll doesn't reveal a total backlash against the technology but there is a clear crisis of confidence. According to...
Designing citizen-centric AI
Artificial intelligence in the public sector is often framed as a way to do more with less and boost productivity. But that's true only if it's designed and deployed around the needs, rights and expectations of the people it affects. Citizen-centred AI is a discipline...
Why we have the AI we do
Why do we have the AI we do? How power and scale – not intelligence – built the machines we now call smart.
This is no AI bubble – here’s why
Fears about another AI winter are misguided. AI is not more advanced than earlier eras, but economic and political conditions have changed.
How AI is reshaping news consumption
The Reuters Institute’s Digital News Report 2025 lands at a crucial moment for news media. Based on nearly 100,000 survey respondents across 48 markets, this year’s report is the most extensive to date, covering regions that together account for more than half the...
Why the calls to pause the EU AI Act?
The EU Artificial Intelligence Act (EU AI Act) is the world’s first comprehensive regulatory framework for AI, adopted in May 2024 and in force since August that year. Its approach is famously risk-based: it bans ‘unacceptable risk’ systems outright, imposes strict...
What global standards for artificial intelligence mean for AI assurance
Efforts to govern AI to date have relied heavily on voluntary principles and high-level ethics guidelines emphasising fairness, transparency, accountability, and safety. While useful for setting the tone of responsible AI development, guidelines often lacked the...
Artificial intelligence and UK national security
If the UK’s Strategic Defence Review (SDR) set out how AI will transform the UK’s warfighting capabilities within NATO, the National Security Strategy (NSS) (published only a week later) considers artificial intelligence and UK national security as a whole-of-society...
Artificial intelligence and the future of allied defence
As the UK’s 2025 Strategic Defence Review (SDR) makes unambiguously clear, AI - along with other emerging technologies - is becoming the linchpin of a new military reality. The shift is being driven by geopolitical realities: a resurgent and aggressive Russia, the...
Introducing BeehAIve®: ethical AI assurance at enterprise scale
Artificial intelligence is becoming embedded in the daily operations of enterprise organisations worldwide. But as adoption accelerates, so do the risks — and so does the scrutiny. With regulators, investors, and the public demanding greater accountability, AI...
AI and virtual manipulation in 2025
Artificial intelligence (AI) has moved to the centre of geopolitical strategy. The 2025 Virtual Manipulation Brief from the NATO Strategic Communications Centre of Excellence (StratCom COE) offers a sobering examination of how AI is now embedded in foreign information...
Shaping responsible AI in the energy sector
As artificial intelligence presents the possibility of becoming an integral driver of the energy transition, the industry finds itself at an important moment. AI promises enormous benefits - from real-time grid optimisation and predictive maintenance to customer...
When AI runs the company: autonomous agents at work
Imagine an office staffed entirely by AI agents - developers, project managers, finance clerks, HR reps - all working diligently behind their screens, clicking, typing, emailing, compiling, and occasionally, getting hilariously confused about whether a chatbot named...
Is AI sustainability a top-down or bottom-up problem?
Artificial intelligence (AI) is being framed in some quarters as being equal only to the invention of electricity, inevitably poised to transform everything from healthcare to finance, from supply chains to creativity. But as the global business community and...
AI is making coding across languages easier than ever
In his latest article How to Become a Multilingual Coder, AI pioneer Andrew Ng reflects on how artificial intelligence tools are reshaping the programming landscape - and what that means for developers everywhere. Rather than being tied to just one language, coders...
Indirect prompt injection: Gen AI’s hidden security flaw
As generative AI (GenAI) systems become integrated into business operations, a subtle yet significant security vulnerability has emerged: indirect prompt injection. Unlike direct prompt injection, where attackers input malicious prompts directly into an AI system,...
SAFE AI: a responsible AI framework for humanitarian action
Last month, the UK's Foreign, Commonwealth and Development Office (FCDO) hosted a roundtable and the soft launch of the SAFE AI project, a major new initiative funded by the FCDO and delivered by a consortium comprising the CDAC Network, The Alan Turing Institute, and...
Tracking AI incidents: OECD AIM and AIAAIC Repository
The societal stakes of artificial intelligence (AI)'s deployment continues to rise exponentially. Alongside the promised efficiency and innovation, the proliferation of AI systems has also generated a growing number of incidents where these systems malfunction, behave...
Quantisation in speech and language models
Quantisation underpins digital signal processing and now elements of contemporary machine learning. Digital audio and images are ubiquitous, and quantisation represents one of the core transformations from analogue to digital format - it converts continuous signals or...
What is Responsible AI in 2025?
In 2025, the concept of responsible AI (RAI) is shifting. Whereas it might once have been a collection of ethical principles mainly discussed in academic and policy circles, it's now a tangible, operational set of standards and practices increasingly embedded in the...
Governing AI agents
As artificial intelligence continues its break neck development pace, AI agents are emerging as the probable next frontier. These agents are not just advanced chatbots; they are systems capable of autonomously achieving goals in the world with minimal human input....
The business case for ethical AI: clear ROI
Operationalising AI ethics is no longer a luxury (if it ever was) - in the current climate it's a commercial necessity. As artificial intelligence plays an ever-larger role in business decision-making, the conversation around ethics is shifting from “nice to have” to...
Calculating AI’s energy use: frameworks and tools
Training large AI models - especially foundation models and generative architectures - can consume megawatt-hours of electricity, often with associated CO₂ emissions depending on the energy source. For AI developers, researchers and CTOs, quantifying and minimising...
Opt-out data use schemes: ethical implications
As governments and companies increasingly rely on data to power artificial intelligence (AI), shape public policy and deliver services, how that data is collected and used takes on enormous ethical significance. A key area is the use of opt-out data use schemes, in...
AI 2027: a race to superintelligence?
AI Futures Project has released ‘AI 2027’, detailed scenario forecasting how superintelligence might emerge from 2024 through 2027.
Le Chat and the European AI ambition
Le Chat, the conversational AI assistant developed by French startup Mistral AI, is Europe’s most serious effort yet to develop sovereign AI systems.
AI and energy
Artificial intelligence (AI) is scaling rapidly across industries, embedded in everything from content generation to predictive maintenance, autonomous vehicles to smart buildings. But beneath the surface lies an urgent, often overlooked, reality: there is no AI...




























