
Designing citizen-centric AI

Oct 30, 2025 | AI Development

Artificial intelligence in the public sector is often framed as a way to do more with less and boost productivity. But that’s true only if it’s designed and deployed around the needs, rights and expectations of the people it affects. Citizen-centric AI is a discipline that blends service design, data governance, public engagement and accountable engineering to deliver outcomes people actually value.

In the UK, the government’s AI Exemplars Programme makes this shift explicit. Rather than betting on one overarching system, it takes a portfolio approach – scanning for promising opportunities, piloting quickly, then scaling what works – while learning openly from false starts. Recent exemplars range from AI-assisted hospital discharge summaries (with clinicians firmly in control), to tools that help local authorities extract data from decades of paper planning records so decisions are faster and more consistent, to GOV.UK Chat for plain-English answers grounded in official guidance, and productivity assistants for policy teams. The message is: use AI where it improves a service people already need, and keep humans accountable for decisions that affect lives.

What ‘citizen-centric AI’ actually means

Citizen-centric AI starts from outcomes that matter to people – safety, fairness, speed, dignity – and works backwards. It assumes that an AI model is only one part of a wider service that includes policies, processes, staff capabilities and feedback loops. It insists on proportionate use of automation: augment where possible, automate where safe, and always keep a human route for redress.

The idea is supported by a growing evidence base. OECD analysis situates government not just as regulator and investor but as user and developer of AI, with benefits tied to responsiveness and accountability – provided the groundwork in data management, ethics and capability is done well. Recent chapters on AI in service design note that most OECD countries now use AI to improve delivery, but warn that governance and transparency need to keep pace. 

Public attitude research reinforces this. Surveys consistently find that people support AI when it is well-governed and improves specific public outcomes, and that clear rules and transparency make them significantly more comfortable with deployment. According to a nationally representative survey of UK attitudes to AI published by the Ada Lovelace Institute and the Alan Turing Institute, 72% of the UK public say that laws and regulation would increase their comfort with AI, up from 62% in 2023.

Academic studies in the UK show that citizens’ acceptance of AI in public decisions depends on the domain, the safeguards and the human oversight on offer; people distinguish sharply between, say, parking permits and immigration decisions. Other work finds that even where people will accept AI, they still prefer meaningful human involvement in sensitive contexts.

Four high-profile cases make clear what can go wrong

• The UK’s A-level grading algorithm in 2020 aimed to standardise results during the pandemic, but it downgraded many individual students’ results based on the historical performance of their school. The public backlash was huge, fuelled by perceptions of unfairness and the lack of a clear appeal route. Research analysing the episode highlights how the public demanded justification beyond technical explainability.

• In the Netherlands, the childcare benefits scandal (“Toeslagenaffaire”) saw thousands of families – disproportionately those with dual nationality – wrongly labelled as fraudulent and forced to repay large sums. Oversight reports found unlawful, discriminatory processing and poor transparency, culminating in the government’s resignation in January 2021. 

• Australia’s Robodebt automated debt recovery programme unlawfully averaged incomes, issuing inaccurate notices to hundreds of thousands of people. A Royal Commission concluded in 2023 that the scheme produced erroneous outcomes, breached the Social Security Act, and caused widespread harm; the government later refunded hundreds of millions of dollars.

• In US criminal justice, investigations into the COMPAS risk assessment tool triggered years of debate about measurement, bias and due process – again underscoring that accuracy metrics alone don’t answer questions of fairness or appeal. 

Each case spotlights similar systemic flaws: misaligned objectives, weak scrutiny, limited routes for challenge, and scant attention to the lived experience of people on the receiving end. None of these were just algorithmic mistakes – they were service design failures.

Principles of citizen-centric AI

A concise set of principles translates the research and lessons above into delivery practice:

  • Purpose before model. Start from a user-centred service goal (e.g., reduce waiting times without compromising safety) and a harm hypothesis. If an AI model does not materially improve the goal, don’t use it.
  • Proportionality and human oversight. Match the level of automation to the risk of the decision. Preserve human review for edge cases and high-impact outcomes; make escalation easy.
  • Data dignity and governance. Minimise data; document provenance; test for representativeness; apply privacy-by-design; and publish data protection impact assessments in plain language.
  • Explainability for the right audience. Combine technical interpretability with service-level explanations: what factors matter, how to contest a decision, and who is accountable.
  • Algorithmic impact assessment and public participation. Run assessments before deployment; invite affected users, civil society groups and frontline staff into scoping and testing. 
  • Observability and audit. Instrument services to detect drift, disparities and failure modes; log decisions; commission independent audits; publish summary findings and fixes (see the monitoring sketch after this list).
  • Real-world evaluation. Use randomised or quasi-experimental trials where appropriate; report accuracy and error distribution across groups; measure impact on user experience, not just throughput.
  • Handrails for staff. Provide training, guidance and “safe-use” patterns for civil servants; design tools that fit existing workflows rather than forcing brittle workarounds.
  • Transparency by default. Maintain an up-to-date, searchable register of algorithmic tools and their purposes, with links to documentation and evaluation. Past reporting on patchy disclosure shows how quickly trust erodes when registers are incomplete. 
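
To make the observability and audit principle concrete, here is a minimal Python sketch of how a delivery team might log AI-assisted decisions and watch for error-rate disparities across groups. Everything in it – the `DecisionRecord` fields, the 5% disparity threshold, the example cases – is an illustrative assumption rather than a prescribed standard.

```python
# Minimal sketch: decision logging and group-level disparity monitoring.
# All field names, thresholds and example values are illustrative assumptions.
from collections import defaultdict
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    """One logged decision: enough detail to audit, explain and contest it later."""
    case_id: str
    model_version: str
    recommendation: str                   # what the model suggested
    final_decision: str                   # what the human-reviewed service decided
    reviewed_by_human: bool
    group: str                            # monitoring group, e.g. region or age band
    outcome_correct: bool | None = None   # filled in once ground truth is known
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


class DecisionLog:
    """Append-only log with simple disparity and override checks."""

    def __init__(self, disparity_threshold: float = 0.05):
        self.records: list[DecisionRecord] = []
        self.disparity_threshold = disparity_threshold

    def record(self, rec: DecisionRecord) -> None:
        self.records.append(rec)

    def error_rate_by_group(self) -> dict[str, float]:
        """Error rate per group, using only records with known outcomes."""
        totals, errors = defaultdict(int), defaultdict(int)
        for r in self.records:
            if r.outcome_correct is None:
                continue
            totals[r.group] += 1
            errors[r.group] += 0 if r.outcome_correct else 1
        return {g: errors[g] / totals[g] for g in totals}

    def disparity_alerts(self) -> list[str]:
        """Flag groups whose error rate exceeds the average group rate by the threshold."""
        rates = self.error_rate_by_group()
        if not rates:
            return []
        average = sum(rates.values()) / len(rates)
        return [
            f"Group '{g}' error rate {r:.1%} exceeds the average of {average:.1%}"
            for g, r in rates.items()
            if r - average > self.disparity_threshold
        ]

    def human_override_rate(self) -> float:
        """Share of decisions where the final decision differed from the model's suggestion."""
        if not self.records:
            return 0.0
        overrides = sum(1 for r in self.records if r.final_decision != r.recommendation)
        return overrides / len(self.records)


if __name__ == "__main__":
    log = DecisionLog()
    log.record(DecisionRecord("case-001", "v1.2", "approve", "approve", True, "north", True))
    log.record(DecisionRecord("case-002", "v1.2", "reject", "approve", True, "south", True))
    log.record(DecisionRecord("case-003", "v1.2", "reject", "reject", True, "south", False))
    print(log.error_rate_by_group())
    print(log.disparity_alerts())
    print(f"Human override rate: {log.human_override_rate():.0%}")
```

The design point is that the log captures the human-reviewed final decision alongside the model’s recommendation, so override rates and group-level disparities can be published as part of routine audit rather than reconstructed after something has gone wrong.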

How to build it

Putting these principles into practice starts with a public value test. Begin with a plain-English problem statement and desired outcomes (including equity and safety). Capture potential harms and who bears them. Validate with service users and frontline staff. If you cannot articulate the benefit people will notice, pause.

Next, identify where decisions are made today, who makes them, and which could be supported by predictions or summarisation. Create a ‘decision inventory’ with risk classification (e.g., informational, triage, determinations) – a minimal sketch follows below. High-impact determinations need the strongest safeguards.

Then run a pilot. Pick one decision point where AI can plausibly improve a measurable outcome (e.g., time to discharge summary; backlog reduction in planning document digitisation) and where human oversight is practical. Use the UK AI Exemplars model – scan, pilot, scale – so the pilot is explicitly a learning instrument, not a stealth production launch.
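
The decision inventory can be a simple structured table rather than a new system. Below is a minimal Python sketch of how it might be represented; the risk tiers follow the informational / triage / determination split above, while the field names and example entries (drawn loosely from the exemplars mentioned earlier) are hypothetical.

```python
# Minimal sketch of a decision inventory with risk classification.
# Field names and example entries are hypothetical illustrations.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    INFORMATIONAL = "informational"   # e.g. summarising guidance for staff
    TRIAGE = "triage"                 # e.g. prioritising a queue; a human decides
    DETERMINATION = "determination"   # e.g. eligibility or enforcement decisions


@dataclass
class DecisionPoint:
    service: str
    decision: str
    current_decision_maker: str
    risk_tier: RiskTier
    ai_role: str                      # "none", "summarise", "predict" or "draft"
    human_review_required: bool
    appeal_route: str                 # how an affected person contests the outcome


inventory = [
    DecisionPoint(
        service="Hospital discharge",
        decision="Draft discharge summary",
        current_decision_maker="Clinician",
        risk_tier=RiskTier.INFORMATIONAL,
        ai_role="draft",
        human_review_required=True,
        appeal_route="Clinician edits or rejects the draft before sign-off",
    ),
    DecisionPoint(
        service="Planning applications",
        decision="Extract data from paper records",
        current_decision_maker="Planning officer",
        risk_tier=RiskTier.TRIAGE,
        ai_role="summarise",
        human_review_required=True,
        appeal_route="Officer checks extracted fields against the original document",
    ),
    DecisionPoint(
        service="Welfare eligibility",
        decision="Decide benefit entitlement",
        current_decision_maker="Caseworker",
        risk_tier=RiskTier.DETERMINATION,
        ai_role="none",               # kept manual until safeguards are proven
        human_review_required=True,
        appeal_route="Statutory review and independent tribunal",
    ),
]

# Any AI-assisted determination is flagged for the strongest safeguards.
for point in inventory:
    needs_full_safeguards = (
        point.risk_tier is RiskTier.DETERMINATION and point.ai_role != "none"
    )
    print(f"{point.service}: tier={point.risk_tier.value}, "
          f"ai_role={point.ai_role}, full safeguards={needs_full_safeguards}")
```

An inventory like this makes the proportionality principle auditable: anyone can see which decisions are touched by AI, at what risk tier, and what the human review and appeal route is for each.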

Importantly, design the public experience first. Prototype the citizen journey, including: what the AI does; what the person sees; how they consent (if needed); how they appeal; and how they get a human. Write the explanations people will read; test them for clarity. OECD guidance stresses designing for accountability at this level, not only in code. Assemble a multi-disciplinary, socio-technical team. Pair data scientists with service designers, policy leads, domain experts and legal/privacy specialists. Name a service owner who is accountable for outcomes and a data steward who is accountable for inputs and monitoring. Give the team a budget line for user engagement.
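
One practical way to design the public experience first is to draft the citizen-facing notice as a structured artefact that must be written and user-tested before any model work starts. The sketch below is hypothetical; the field names and the planning-records example are assumptions, not an official template.

```python
# Hypothetical sketch of a plain-language decision notice, drafted before any model work.
from dataclasses import dataclass


@dataclass
class DecisionNotice:
    """What a person sees alongside an AI-assisted decision."""
    what_was_decided: str
    how_ai_was_used: str          # what the AI did, in one sentence
    key_factors: list[str]        # the factors that mattered, in plain English
    who_is_accountable: str       # a named role, not a system
    how_to_appeal: str            # a concrete route to a human review
    expected_response_time: str


notice = DecisionNotice(
    what_was_decided="Your planning record has been digitised and added to your case file.",
    how_ai_was_used="Software extracted dates and reference numbers from scanned documents; "
                    "a planning officer checked them.",
    key_factors=["The scanned paper record", "The officer's check of the extracted fields"],
    who_is_accountable="The planning service manager at your local authority",
    how_to_appeal="Reply to this notice or call the planning office to ask for a human re-check.",
    expected_response_time="10 working days",
)

# Print the draft so it can be tested for clarity with real users.
for name, value in vars(notice).items():
    if isinstance(value, list):
        value = "; ".join(value)
    print(f"{name.replace('_', ' ').capitalize()}: {value}")
```

If a team cannot fill in the accountability and appeal fields in plain language, that is usually a sign the service design is not yet ready for automation.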

If the pilot meets its outcomes with no unacceptable disparities, plan scaling with staged gates. If not, stop. The UK’s welfare sector has seen prototypes paused or dropped when they failed to prove value or robustness – evidence that not scaling is a legitimate, trust-building outcome. 

The research message

Three strands from research should inform every public sector AI brief:

Context matters – acceptance and fairness are domain-specific; what is tolerable for document triage is unacceptable for welfare eligibility or sentencing. Don’t copy patterns across domains without re-assessing risk. 

Governance and participation work – citizen participation in design and oversight improves legitimacy and surfaces harm earlier; it also produces better explanations because they’re co-written with the people who must understand them. 

Rules boost comfort – public support rises when people see clear regulation, routes for redress and independent audit. The Ada/Turing surveys quantify this effect across multiple years. 

Designing citizen-centric AI isn’t about dampening ambition; it’s about earning the permission to be ambitious where it matters. The UK’s AI Exemplars portfolio shows how to move quickly and carefully: choose problems that matter to the public, prototype with guardrails, measure what users feel, publish what you learn, and stop when the evidence says stop. Do that consistently and you don’t just roll out models – you build services people trust, understand and choose to use.