
Artificial intelligence and the future of allied defence

Jun 23, 2025 | AI Security

As the UK’s 2025 Strategic Defence Review (SDR) makes unambiguously clear, AI – along with other emerging technologies – is becoming the linchpin of a new military reality. The shift is being driven by geopolitical realities: a resurgent and aggressive Russia, the erosion of US security guarantees in Europe, and a renewed urgency for NATO nations to rearm, reorient and reimagine defence around 21st-century threats. AI is at the core of the UK’s attempt to meet the moment.

The UK’s Strategic Defence Review is rightly ambitious in calling for AI to be embedded across the force. But true responsibility requires a layered ethical architecture – one that recognises that what is permissible in a back-office logistics application is very different from what is permissible in autonomous targeting.

AI at war: lessons from Ukraine

The SDR repeatedly references the war in Ukraine as both a warning and a model. The sheer scale of AI-enabled combat – particularly through autonomous drones, algorithmic target acquisition, and real-time ISTAR (intelligence, surveillance, target acquisition and reconnaissance) analysis – has illustrated what contemporary high-intensity war looks like.

One example is the ‘Recce Strike Complex’ model the British Army is developing, drawing explicitly from Ukrainian experience. This approach fuses long-range fires, AI-enabled surveillance, and drone swarms into a cohesive strike ecosystem. The goal is a tenfold increase in land lethality compared to conventional armoured brigades – a transformation impossible without AI-driven decision systems and autonomous assets.

Similarly, the Royal Air Force’s participation in the Global Combat Air Programme (GCAP) reflects a recognition that sixth-generation air power must be inherently AI-native. The future combat air system will include not just manned aircraft but autonomous collaborative platforms, decision aids, and smart munitions. All will rely on AI not only for control but also for tactical reasoning.

Procurement reform

Integrating AI at scale requires more than just the technology – it needs institutional transformation. The SDR makes this explicit. Defence Reform, the restructuring programme launched in February 2025, is attempting to eliminate legacy procurement models, flatten hierarchies, and establish new organisational bodies with a mandate to drive AI adoption.

Among these is the newly created UK Defence Innovation (UKDI) body, which will oversee the AI pipeline from discovery to deployment. With a ringfenced annual budget of £400 million and a remit that spans dual-use and military-specific innovation, UKDI is charged with delivering the MOD’s AI ambitions at speed. It is complemented by a new Defence Research and Evaluation organisation, which will serve as a gateway to academia and pre-commercial R&D.

The National Armaments Director (NAD), a role reinstated and empowered in this review, will provide top-down industrial leadership. Procurement is being streamlined under a ‘three-lane’ model, with contracts for modular upgrades expected within a year and rapid tech exploitation contracts targeted within three months. At least 10% of the MOD’s equipment budget is earmarked for novel technologies, including AI.

This is a radical departure from the previous pace of defence acquisition – often criticised as running years behind the curve – and it is vital for adopting a technology that is itself developing rapidly. The SDR’s ambition is to emulate the pace of innovation seen in the commercial tech sector, particularly in AI, where breakthroughs occur in weeks rather than financial years.

Ethics, escalation and the AI arms race

Any serious national strategy for AI in defence, however, must grapple with a central dilemma: how to remain competitive in a rapidly accelerating technological contest without compromising the values that underpin democratic legitimacy. The UK’s Strategic Defence Review implicitly acknowledges this tension. While it commits to AI development “at wartime pace,” it also signals the government’s intention to embed this work within existing ethical and regulatory frameworks and international agreements on responsible AI use.

The difficulty, of course, is that the UK and its NATO allies are operating in a geopolitical environment where key adversaries may not share this caution. Russia, China, Iran and North Korea are all developing AI-enabled capabilities, from autonomous weapons to cognitive electronic warfare, with limited transparency and, in many cases, little concern for international norms. This asymmetry creates a pressure to match pace without matching methods.

Defining ethical red lines: tactical, operational and strategic

Ethical decision-making in military AI is not a single calculation – it varies profoundly depending on context, scale and consequence. A framework built around the three levels of warfare (tactical, operational and strategic) offers a useful lens for defining ethical red lines – boundaries that AI should not cross, based on both legal norms and national values.

Tactical: the fight on the ground

At the tactical level – on the battlefield, in real time – the greatest ethical challenge lies in the delegation of lethal force. Here, the red line is clearest:

No fully autonomous systems should be permitted to make kill decisions without meaningful human control.

This reflects the UK’s commitment to international humanitarian law (IHL) and its core principle of distinction – the obligation to differentiate between combatants and civilians. Even when operating at machine speed, there must be a human accountable for use of force.

That said, AI can and should support tactical decision-making in non-lethal roles: navigation, target identification, route planning, and casualty evacuation, where speed and data fusion increase survivability and reduce risk to forces and civilians alike.

The ethical tightrope is not just whether AI makes a decision, but how a human interprets and relies on it. Overreliance or blind trust in opaque algorithms – so-called “automation bias” – can also be ethically hazardous, even if formal control is retained.
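By way of illustration only, the minimal Python sketch below shows one way a “meaningful human control” gate could be expressed in software: the AI may recommend an engagement, but nothing proceeds without an explicit decision from a named, accountable operator, and the recommendation carries its confidence and rationale so the human can interrogate it rather than rubber-stamp it. This is not a description of any UK or NATO system; every class and function name here is hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Illustrative sketch only: the AI's engagement recommendation is advisory and
# can never be acted on without an explicit decision from a named operator.

@dataclass
class EngagementRecommendation:
    target_id: str
    classification: str      # e.g. assessed combatant status, per IHL distinction
    confidence: float        # shown to the operator, never acted on alone
    rationale: str           # human-readable explanation, to counter automation bias
    model_version: str

@dataclass
class HumanDecision:
    operator_id: str         # the accountable human in the chain of command
    approved: bool
    timestamp: datetime
    note: Optional[str] = None

def authorise_engagement(rec: EngagementRecommendation,
                         decision: Optional[HumanDecision]) -> bool:
    """Return True only if an identified operator has explicitly approved."""
    if decision is None or not decision.operator_id:
        return False                     # no accountable human, no engagement
    if not decision.approved:
        return False                     # explicit human veto
    log_decision(rec, decision)          # every approval leaves an audit trail
    return True

def log_decision(rec: EngagementRecommendation, decision: HumanDecision) -> None:
    # Placeholder: a real system would write to a tamper-evident audit store.
    print(f"{decision.timestamp.isoformat()} operator={decision.operator_id} "
          f"target={rec.target_id} approved={decision.approved} "
          f"model={rec.model_version} confidence={rec.confidence:.2f}")
```

The point of the sketch is structural rather than technical: the machine’s output is a data object, not an action, and the only path from recommendation to action runs through a logged human decision.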

Operational: coordinating campaigns and systems

At the operational level – where units are coordinated across domains and over time – the red lines become more blurred and contested.

AI must not be allowed to autonomously escalate conflict without a clear human chain of accountability.

Autonomous systems operating in swarms or coordinating across domains (e.g. air and cyber) introduce a risk of unintended consequences. For instance, an AI-enabled air defence system responding automatically to a perceived threat in a contested environment might trigger a broader chain of events, including cross-border escalation.

Operational AI also encompasses logistics and ISTAR. While these are often framed as ethically neutral, they are not: decisions about resource allocation, battlefield surveillance, or pattern-of-life analysis can disproportionately affect civilian populations or create bias-driven targeting profiles.

Red lines at this level must address transparency, auditability, and proportionality. If commanders cannot explain why a system prioritised a target or planned a strike route, the ethical and legal ground becomes shaky.
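As a purely illustrative sketch of what auditability could mean in practice (the field names are hypothetical assumptions, not drawn from any MOD system), the Python below captures, at the moment a recommendation is made, the inputs, model version, stated factors and accountable officer, and seals the record with a content hash so that later tampering is detectable.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Illustrative sketch: record enough context at the moment of recommendation
# that a commander or oversight body can later reconstruct why the system
# prioritised a target or planned a particular route.

@dataclass
class DecisionRecord:
    system_id: str
    model_version: str
    timestamp: str
    inputs_summary: dict         # which sensor feeds and data sources were used
    recommendation: str          # what the system proposed
    stated_factors: list         # factors the system reports as decisive, in rank order
    proportionality_note: str    # assessed civilian risk, in plain language
    approving_officer: str       # the accountable human in the chain of command

def seal(record: DecisionRecord) -> str:
    """Return a content hash so later tampering with the record is detectable."""
    payload = json.dumps(asdict(record), sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

# Hypothetical example record, with placeholder values throughout.
record = DecisionRecord(
    system_id="istar-fusion-01",
    model_version="2025.06-rc2",
    timestamp=datetime.now(timezone.utc).isoformat(),
    inputs_summary={"uav_feed": "ref-001", "sigint": "ref-002"},
    recommendation="prioritise target X for observation",
    stated_factors=["pattern-of-life match", "proximity to friendly forces"],
    proportionality_note="no civilian structures within the assessed risk area",
    approving_officer="OF-3 (example)",
)
print(seal(record))
```

A record like this does not make a system explainable by itself, but it does make the question “why did the system choose this?” answerable after the fact, which is the minimum that proportionality reviews and legal scrutiny require.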

Strategic: policy, posture and escalation

At the strategic level, AI touches the most consequential questions: deterrence posture, nuclear command and control, and national security decision-making. Here, ethical red lines are not about individual decisions, but systemic risk.

AI must never be in a position to unilaterally initiate or escalate to strategic conflict, including nuclear use.

This is less about technical design and more about governance. Strategic decision-making – particularly under crisis conditions – relies on nuanced judgment, political calculus, and ethical trade-offs that no AI system can replicate. Delegating too much authority to automated decision-support systems risks “automation entrapment,” where human leaders feel compelled to act because the machine “advises” it.

Additionally, AI systems at this level often process massive data streams, including from social media, cyber environments, and economic indicators. While this can aid decision-making, it also opens new avenues for disinformation, manipulation, and misinterpretation.

Red lines at the strategic level must include constraints on the use of opaque or black-box AI for threat assessment, policy development, or crisis response. Transparency to oversight bodies – whether parliamentary, judicial, or alliance-based – is essential.
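To make the idea concrete, the short Python sketch below treats such a constraint as policy-as-code: a decision-support model is admitted to strategic roles only if it declares the transparency artefacts that oversight would need. The artefact names are illustrative assumptions, not an existing standard or MOD requirement.

```python
# Illustrative policy-as-code sketch: a decision-support model is only
# admitted to strategic roles if it declares the transparency artefacts
# that oversight bodies would need.

REQUIRED_ARTEFACTS = {
    "model_card",             # documented purpose, data provenance, known limits
    "explainability_report",  # a method for explaining individual outputs
    "audit_log_endpoint",     # where every recommendation is recorded
    "human_override",         # a documented mechanism for human veto
}

def admit_to_strategic_use(declared_artefacts: set) -> bool:
    """Reject any model that cannot account for itself to oversight."""
    missing = REQUIRED_ARTEFACTS - declared_artefacts
    if missing:
        print(f"Rejected: missing {sorted(missing)}")
        return False
    return True

admit_to_strategic_use({"model_card", "audit_log_endpoint"})  # rejected
admit_to_strategic_use(set(REQUIRED_ARTEFACTS))               # admitted
```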

By explicitly drawing these red lines at every level of warfare, the UK has an opportunity to set the pace not only in capability, but in moral leadership. It can help NATO define not just what AI-enabled warfare looks like – but what must remain off-limits, no matter how tempting the strategic advantage.

Maintaining the balance – developing systems that are both effective in combat and consistent with British and NATO values – is a challenge. But it also presents an opportunity. The UK can set the benchmark for responsible and ethical military AI: showing that it is possible to develop autonomous systems that are auditable, interoperable, and human-in-the-loop by default. By embedding ethics into design, not as an afterthought but as a requirement, Britain can strengthen both its moral authority and operational resilience.