
SAFE AI: a responsible AI framework for humanitarian action

Apr 28, 2025 | AI Ethics

Last month, the UK’s Foreign, Commonwealth and Development Office (FCDO) hosted a roundtable and the soft launch of the SAFE AI project, a major new initiative funded by the FCDO and delivered by a consortium comprising the CDAC Network, The Alan Turing Institute, and Humanitarian AI Advisory. Participants, including international NGOs, tech leaders, donor governments and AI researchers, offered diverse reflections on the future of AI in humanitarian action.

Opening the event, Matthew Wyatt, Director of Humanitarian, Migration, and Food Security at the FCDO, spoke powerfully about the unprecedented pressures facing the humanitarian system. With over 300 million people worldwide needing life-saving support this year, and resources available to assist only around 100 million, Wyatt underlined the critical role that artificial intelligence (AI) could play in addressing this widening gap. He called for greater collaboration between humanitarian actors and AI developers, arguing that such partnerships are essential for mitigating risks and ensuring that AI is deployed safely and responsibly in high-stakes environments.

The event featured three short, focused discussions centred on:

  • AI governance
  • AI assurance and trustworthy partnerships
  • participatory approaches to developing AI solutions.

Key insights from these discussions included:

The transformative potential of AI – if properly governed

Participants emphasised that AI holds enormous potential to transform humanitarian operations, particularly as technological capabilities grow and costs decline. Innovations such as DeepSeek demonstrate the possibility of making advanced AI more accessible. However, lower costs alone are not enough; responsible design and inclusive implementation are crucial. Without appropriate safeguards, the sector risks a “race to the bottom,” where pressure to adopt AI rapidly could compromise governance and ethical standards.

Creating the right incentives for responsible AI

The discussion highlighted the urgent need to align incentives towards developing safe and responsible AI solutions. These incentives must reinforce commitments to localisation, community participation, accountability to affected populations, and transparency – ensuring that AI solutions deliver greater impact for crisis-affected communities.

Collaboration with the private sector: opportunities and risks

While partnerships with AI firms can drive innovation, participants warned that collaboration must be approached with vigilance. In recent years, some technology companies have moved away from ethical commitments in response to political pressures, raising concerns about the compatibility of their priorities with humanitarian principles such as impartiality, neutrality and do-no-harm. The dual-use nature of AI technologies – serving both defence and humanitarian aims – adds complexity to these relationships.

Establishing common standards for humanitarian AI

Trust in AI solutions can be strengthened by developing shared standards around explainability, transparency, accountability, and community participation. Such standards will be critical for evaluating the suitability of AI tools for humanitarian applications. Participants noted that achieving these standards requires significant investment in time and resources – both of which are currently in short supply in the humanitarian AI sector. However, existing non-technical standards on AI risk and AI governance could serve as a meaningful foundation on which to build more specific standards and guidance.

Building an evidence-based, collaborative future

Greater transparency, information-sharing, and evidence dissemination across the humanitarian sector are essential. Yet heightened competition for shrinking funding streams has limited collaboration. To unlock the full potential of AI for humanitarian action, organisations must prioritise networked learning over competition and commit to sharing what works and what doesn't when using AI to amplify impact in humanitarian contexts.

Strengthening existing communities of practice (CoPs) and creating new platforms for exchange and co-creation were highlighted as key next steps. Benchmarking initiatives grounded in internationally recognised principles, such as those developed by the Singaporean government, offer promising models for validating good practice.

Valuing foundational AI tools

Participants cautioned against an exclusive focus on cutting-edge generative AI tools and large language models (LLMs). In many cases, less advanced but robust machine learning models can offer equally valuable – or even more appropriate – solutions for humanitarian challenges. Good data stewardship was also identified as a foundational element of responsible AI. Ethical data architecture, protection, ownership, and use must be prioritised, with broader discussions around data sovereignty gaining traction across the sector.

Embedding participation and community agency in AI development

Effective humanitarian AI must be designed with, not just for, affected communities. Current gaps in community engagement must be addressed, recognising that inclusive co-design processes lead to better outcomes and are a cornerstone of responsible AI. However, participatory approaches demand substantial investment of both time and resources. Participants outlined key elements for effective participation, including the need to analyse and disrupt existing power structures, enhance community capacity, and focus specifically on those most affected.

Language diversity and local innovation

Language AI currently draws on highly limited datasets, dominated by just 17 languages. This significant gap must be addressed if AI technologies are to serve the needs of global populations effectively. Participants called for greater investment in supporting linguistic diversity within AI models.

Empowering local innovation was another major theme. The sector can build more resilient and context-appropriate responses by enabling communities to develop their own low-cost AI tools, such as TinyML models.

Listening to those most impacted

Finally, the discussions urged humanitarians to focus on those already impacted by AI technologies – even if they are not direct users – and ensure their voices are represented in the emerging humanitarian system.

As participants concluded, understanding the details of AI deployment – the technology being used, by whom, for what purpose, and in what context – remains crucial. AI can deliver on its promise to transform humanitarian action for the better only through careful, values-driven engagement.