Artificial intelligence (AI) has moved to the centre of geopolitical strategy. The 2025 Virtual Manipulation Brief from the NATO Strategic Communications Centre of Excellence (StratCom COE) offers a sobering examination of how AI is now embedded in foreign information manipulation and interference (FIMI), with an emphasis on Russian campaigns targeting NATO and its partners.
The report not only chronicles how AI tools are being misused to distort truth, inflame divisions, and manipulate populations; it also presents a challenge: how can ethical AI development remain resilient in the face of weaponised digital manipulation?
A double-edged sword
The StratCom COE report underscores how AI, particularly generative and automated systems, has become central to the operational toolkit of actors engaged in disinformation. The use of AI enables not just the automation of message distribution, but the creation of content – textual, visual, and even video-based – that is increasingly difficult to distinguish from authentic material.
Synthetic content at scale
Russian influence operations, as documented in the report, are leveraging AI to scale up hostile narratives across multiple platforms. What once took coordinated human effort can now be executed in minutes by AI-driven bots or synthetic media generators. These systems are deployed to:
- amplify pro-Kremlin messaging on platforms such as Telegram, X (formerly Twitter), and VKontakte (VK)
- create AI-generated personas that build credibility over time
- flood the information space with misleading or emotionally manipulative content
AI’s scalability allows for what the report calls “asymmetric power”: the ability of relatively small state or non-state actors to disproportionately affect the information environment with limited human resources.
Targeting NATO
A central theme of the report is the concerted effort to frame NATO as either ineffectual or dangerously aggressive. AI plays a crucial role in this narrative war. Sophisticated manipulation campaigns use language models and image generation to:
- circulate fabricated stories about NATO aggression or internal discord
- tailor disinformation to resonate with specific regional audiences, such as those in the Baltics and Poland
- exploit platform incentives, for example the monetisation structures on Telegram, to sustain the financial viability of manipulation campaigns
Importantly, AI enables message customisation at scale. Natural language processing models allow bad actors to generate regionally and culturally tailored content, increasing the emotional resonance and perceived authenticity of hostile messaging.
Platform ecosystems and an ethical vacuum
The report expands its focus to ten platforms, reflecting the complex digital ecosystems where manipulation now occurs: from mainstream services like Facebook and YouTube to fringe networks like Odnoklassniki (OK). AI is central not just to manipulation efforts, but also to the platforms themselves, which increasingly rely on AI to moderate, curate, and monetise content.
However, the ethical deployment of platform-based AI remains questionable. The report raises concerns about:
- Opaque recommendation algorithms that may unintentionally amplify disinformation
- Inconsistent content moderation policies and their AI-based enforcement
- Financial incentives that prioritise engagement over factual integrity
As long as profit-driven optimisation remains the dominant AI logic on major platforms, the report warns, they risk becoming unwitting accomplices in the spread of foreign disinformation.
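To make that tension concrete, the sketch below contrasts a purely engagement-driven ranking score with one that also weighs an independent credibility signal. It is an illustrative toy, not any platform's actual algorithm: the weights, the squashing constant, and the assumed 0-to-1 credibility score are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int
    credibility: float  # assumed 0.0 to 1.0 signal, e.g. from fact-checking or source reputation

def engagement_score(post: Post) -> float:
    """Rank purely on interaction volume: the profit-driven logic."""
    return post.likes + 2 * post.shares + 1.5 * post.comments

def integrity_aware_score(post: Post, integrity_weight: float = 0.6) -> float:
    """Blend engagement with a credibility signal so low-trust content is demoted."""
    normalised_engagement = engagement_score(post) / (engagement_score(post) + 100)  # squash into 0 to 1
    return (1 - integrity_weight) * normalised_engagement + integrity_weight * post.credibility

posts = [
    Post("Inflammatory fabricated claim", likes=900, shares=400, comments=300, credibility=0.1),
    Post("Sober, well-sourced report", likes=120, shares=30, comments=40, credibility=0.9),
]

# Engagement-only ranking surfaces the fabricated claim; the blended score surfaces the sourced report.
print(max(posts, key=engagement_score).text)
print(max(posts, key=integrity_aware_score).text)
```

Even this crude blend changes which post surfaces first; the open question the report raises is whether commercial incentives leave room for such integrity signals at all.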
The erosion of trust
At the heart of the AI debate is trust. The report shows how synthetic content – from deepfakes to AI-generated text – can sow not just falsehoods, but doubt. When audiences cannot easily distinguish the real from the fake, the consequences compound: information nihilism, as people disengage and treat all sources as equally untrustworthy; polarisation, as users retreat into ideologically homogenous digital enclaves; and delegitimisation, as democratic institutions and journalistic integrity are eroded.
This dynamic illustrates the ethical paradox of AI: a tool with enormous potential to support informed societies is also a vehicle for confusion, manipulation, and mistrust when deployed irresponsibly.
Lessons from the virtual frontline
The StratCom COE report doesn’t only catalogue problems; it also suggests paths forward. Some key lessons emerge:
1. Transparency must be non-negotiable
From algorithmic content moderation to synthetic media generation, transparency is critical. Developers and platforms should adopt explainable AI models, disclose when content is AI-generated, and provide users with tools to assess credibility.
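As a rough illustration of the disclosure point, the snippet below attaches a machine-readable provenance record to generated text. The schema and the `label_generated_content` helper are hypothetical; a production system would follow an established provenance standard such as C2PA content credentials rather than an ad-hoc format.

```python
import json
from datetime import datetime, timezone

def label_generated_content(text: str, model_name: str, purpose: str) -> dict:
    """Attach a simple, machine-readable disclosure record to AI-generated text.
    Hypothetical schema for illustration only; real deployments would follow a
    provenance standard such as C2PA content credentials."""
    return {
        "content": text,
        "provenance": {
            "ai_generated": True,
            "model": model_name,
            "purpose": purpose,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }

record = label_generated_content(
    text="Summary of today's policy briefing ...",
    model_name="example-llm-v1",          # assumed model identifier
    purpose="internal summarisation",
)
print(json.dumps(record, indent=2))
```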
2. Context-aware AI design
AI systems should be developed with cultural, linguistic, and political contexts in mind. This reduces the risk of manipulation by bad actors who exploit generalised models to create targeted propaganda.
3. Human-in-the-loop governance
Fully autonomous content curation and moderation systems are vulnerable to exploitation and blind spots. Human oversight is essential in high-stakes contexts, particularly those involving political discourse and security.
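A minimal sketch of that principle, assuming a moderation classifier that returns a label, a confidence score, and a topic: decisions that are low-confidence or touch high-stakes topics are queued for human review rather than enforced automatically. The thresholds, topic list, and `classify` stub are illustrative assumptions.

```python
HIGH_STAKES_TOPICS = {"elections", "military", "public health"}  # assumed policy list
CONFIDENCE_THRESHOLD = 0.85                                      # assumed escalation threshold

def classify(post: str) -> tuple[str, float, str]:
    """Stand-in for a real moderation model: returns (label, confidence, topic)."""
    return "possible_disinformation", 0.72, "elections"

def route(post: str) -> str:
    label, confidence, topic = classify(post)
    if label == "benign":
        return "publish"
    # Escalate when the model is unsure or the topic is politically or security sensitive.
    if confidence < CONFIDENCE_THRESHOLD or topic in HIGH_STAKES_TOPICS:
        return "queue_for_human_review"
    return "auto_limit_distribution"

print(route("Claim about ballot tampering in the upcoming election"))  # queue_for_human_review
```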
4. Cross-sector collaboration
Governments, private companies, civil society, and academic institutions must work together. Information manipulation is a systemic challenge that transcends national borders and sectors, requiring collaborative safeguards.
The ethical imperative
The findings of the Virtual Manipulation Brief 2025 are more than merely geopolitical observations. The misuse of AI in the information space affects the credibility of AI technologies as a whole, undermining public trust and regulatory confidence.
Ethical AI adopters must build tools that resist co-option, ensuring their models cannot be easily weaponised; educate clients and users on how to recognise manipulation and misinformation; and advocate for policy frameworks that align with democratic values and human rights.
By embedding ethical foresight into every stage of AI development, businesses can help inoculate societies against the corrosive effects of virtual manipulation.
The 2025 Virtual Manipulation Brief provides an unsettling snapshot of our current information environment. But it also offers clarity. It shows where the vulnerabilities are, and how AI, when wielded unethically, can deepen divides, distort reality, and weaken institutions.
Within that challenge lies opportunity.
The same technologies used to deceive can also be used to defend. AI can bolster fact-checking, improve media literacy, and detect coordinated inauthentic behaviour at speed and scale. But only if the actors building and deploying it are guided by principles of integrity, accountability, and transparency.
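As one example of that defensive use, the sketch below flags bursts of near-identical posts from different accounts within a short time window, a crude signal of coordinated amplification. The similarity threshold, the window, and the sample data are assumptions; real detection systems combine many more behavioural signals.

```python
from datetime import datetime, timedelta
from difflib import SequenceMatcher
from itertools import combinations

WINDOW = timedelta(minutes=10)   # assumed coordination window
SIMILARITY = 0.9                 # assumed near-duplicate threshold

posts = [  # (account, timestamp, text): illustrative sample data
    ("acct_a", datetime(2025, 5, 1, 12, 0), "NATO exercises threaten regional stability!!"),
    ("acct_b", datetime(2025, 5, 1, 12, 4), "NATO exercises threaten regional stability !"),
    ("acct_c", datetime(2025, 5, 1, 18, 30), "Local football results from the weekend"),
]

def flag_coordination(posts):
    """Yield pairs of near-duplicate posts from different accounts posted close together."""
    for (acct1, t1, text1), (acct2, t2, text2) in combinations(posts, 2):
        if acct1 == acct2 or abs(t1 - t2) > WINDOW:
            continue
        if SequenceMatcher(None, text1.lower(), text2.lower()).ratio() >= SIMILARITY:
            yield acct1, acct2, text1

for pair in flag_coordination(posts):
    print("possible coordinated amplification:", pair)
```

Simple heuristics like this are only a starting point, but they illustrate the direction of travel: the same pattern recognition that powers manipulation can, in principled hands, help expose it.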