We’ve been thinking… a lot
This is no AI bubble – here’s why
Fears about another AI winter are misguided. AI is no more advanced than in earlier eras, but economic and political conditions have changed.
How AI is reshaping news consumption
The Reuters Institute’s Digital News Report 2025 lands at a crucial moment for news media. Based on nearly 100,000 survey respondents across 48 markets, this year’s report is the most extensive to date, covering regions that together account for more than half the...
Why the calls to pause the EU AI Act?
The EU Artificial Intelligence Act (EU AI Act) is the world’s first comprehensive regulatory framework for AI, adopted in May 2024 and in force since August that year. Its approach is famously risk-based: it bans ‘unacceptable risk’ systems outright, imposes strict...
What global standards for artificial intelligence mean for AI assurance
Efforts to govern AI to date have relied heavily on voluntary principles and high-level ethics guidelines emphasising fairness, transparency, accountability, and safety. While useful for setting the tone of responsible AI development, guidelines often lacked the...
Artificial intelligence and UK national security
If the UK’s Strategic Defence Review (SDR) set out how AI will transform the UK’s warfighting capabilities within NATO, the National Security Strategy (NSS) (published only a week later) considers artificial intelligence and UK national security as a whole-of-society...
Artificial intelligence and the future of allied defence
As the UK’s 2025 Strategic Defence Review (SDR) makes unambiguously clear, AI - along with other emerging technologies - is becoming the linchpin of a new military reality. The shift is being driven by geopolitical realities: a resurgent and aggressive Russia, the...
Introducing BeehAIve®: ethical AI assurance at enterprise scale
Artificial intelligence is becoming embedded in the daily operations of enterprise organisations worldwide. But as adoption accelerates, so do the risks - and so does the scrutiny. With regulators, investors, and the public demanding greater accountability, AI...
AI and virtual manipulation in 2025
Artificial intelligence (AI) has moved to the centre of geopolitical strategy. The 2025 Virtual Manipulation Brief from the NATO Strategic Communications Centre of Excellence (StratCom COE) offers a sobering examination of how AI is now embedded in foreign information...
Shaping responsible AI in the energy sector
As artificial intelligence emerges as a potential driver of the energy transition, the industry finds itself at an important moment. AI promises enormous benefits - from real-time grid optimisation and predictive maintenance to customer...
When AI runs the company: autonomous agents at work
Imagine an office staffed entirely by AI agents - developers, project managers, finance clerks, HR reps - all working diligently behind their screens, clicking, typing, emailing, compiling, and occasionally getting hilariously confused about whether a chatbot named...
Is AI sustainability a top-down or bottom-up problem?
Artificial intelligence (AI) is being framed in some quarters as comparable only to the invention of electricity, inevitably poised to transform everything from healthcare to finance, from supply chains to creativity. But as the global business community and...
AI is making coding across languages easier than ever
In his latest article How to Become a Multilingual Coder, AI pioneer Andrew Ng reflects on how artificial intelligence tools are reshaping the programming landscape - and what that means for developers everywhere. Rather than being tied to just one language, coders...
Indirect prompt injection: Gen AI’s hidden security flaw
As generative AI (GenAI) systems become integrated into business operations, a subtle yet significant security vulnerability has emerged: indirect prompt injection. Unlike direct prompt injection, where attackers input malicious prompts directly into an AI system,...
SAFE AI: a responsible AI framework for humanitarian action
Last month, the UK's Foreign, Commonwealth and Development Office (FCDO) hosted a roundtable and the soft launch of the SAFE AI project, a major new initiative funded by the FCDO and delivered by a consortium comprising the CDAC Network, The Alan Turing Institute, and...
Tracking AI incidents: OECD AIM and AIAAIC Repository
The societal stakes of artificial intelligence (AI) deployment continue to rise. Alongside the promised efficiency and innovation, the proliferation of AI systems has also generated a growing number of incidents where these systems malfunction, behave...
Quantisation in speech and language models
Quantisation underpins digital signal processing and now elements of contemporary machine learning. Digital audio and images are ubiquitous, and quantisation represents one of the core transformations from analogue to digital format - it converts continuous signals or...
What is Responsible AI in 2025?
In 2025, the concept of responsible AI (RAI) is shifting. Whereas it might once have been a collection of ethical principles mainly discussed in academic and policy circles, it's now a tangible, operational set of standards and practices increasingly embedded in the...
Governing AI agents
As artificial intelligence continues its breakneck development pace, AI agents are emerging as the probable next frontier. These agents are not just advanced chatbots; they are systems capable of autonomously achieving goals in the world with minimal human input....
The business case for ethical AI: clear ROI
Operationalising AI ethics is no longer a luxury (if it ever was) - in the current climate it's a commercial necessity. As artificial intelligence plays an ever-larger role in business decision-making, the conversation around ethics is shifting from “nice to have” to...
Calculating AI’s energy use: frameworks and tools
Training large AI models - especially foundation models and generative architectures - can consume megawatt-hours of electricity, often with associated CO₂ emissions depending on the energy source. For AI developers, researchers and CTOs, quantifying and minimising...
Opt-out data use schemes: ethical implications
As governments and companies increasingly rely on data to power artificial intelligence (AI), shape public policy and deliver services, how that data is collected and used takes on enormous ethical significance. A key area is the use of opt-out data use schemes, in...
AI 2027: a race to superintelligence?
The AI Futures Project has released ‘AI 2027’, a detailed scenario forecasting how superintelligence might emerge between 2024 and 2027.
Le Chat and the European AI ambition
Le Chat, the conversational AI assistant developed by French startup Mistral AI, is Europe’s most serious effort yet to develop sovereign AI systems.
AI and energy
Artificial intelligence (AI) is scaling rapidly across industries, embedded in everything from content generation to predictive maintenance, autonomous vehicles to smart buildings. But beneath the surface lies an urgent, often overlooked, reality: there is no AI...
Building ethical AI: a step-by-step guide for developers
The ethical responsibility of those designing and deploying artificial intelligence (AI) has never been more important. While high-level ethical principles are useful, developers often face a critical question: what does building ethical AI actually look like in...
From frontier labs to public life: how AI shaped the world in 2024
The publication of the AI Index Annual Report 2024 by Stanford University’s Institute for Human-Centred Artificial Intelligence offers the most comprehensive overview yet of artificial intelligence’s sweeping impact, mapping a year that cemented AI’s place in science, policy,...
Applying AI to strategic warning
Strategic warning - the early detection of conflict, instability, or adversarial intent - is a pillar of national security. But today’s intelligence analysts are working with outdated infrastructure, brittle data ecosystems, and escalating cognitive load. The result?...
Exposing the gender gap in AI
Generative AI (Gen AI) promises to transform economies, workforces, and innovation ecosystems. But this transformation is unfolding on a sharply uneven playing field - especially when it comes to gender. A new World Economic Forum report makes the argument clear:...
The state of AI safety
As frontier AI systems advance toward superhuman capabilities, their developers increasingly acknowledge the potentially catastrophic consequences of failure. The three leading labs - OpenAI, Anthropic, and Google DeepMind - are planning to mitigate these risks, but...
Is the future of AI Physical AI?
Artificial intelligence has evolved rapidly over the past two decades: what began as systems trained to identify patterns in data has now extended into AI agents capable of reasoning, acting, and even generating content with human-like fluency. As we look to the...